00:00:00.001 Started by upstream project "autotest-per-patch" build number 132815
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.014 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:19.579 The recommended git tool is: git
00:00:19.580 using credential 00000000-0000-0000-0000-000000000002
00:00:19.582 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:19.598 Fetching changes from the remote Git repository
00:00:19.601 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:19.620 Using shallow fetch with depth 1
00:00:19.620 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:19.620 > git --version # timeout=10
00:00:19.635 > git --version # 'git version 2.39.2'
00:00:19.635 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:19.650 Setting http proxy: proxy-dmz.intel.com:911
00:00:19.650 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:25.618 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:25.633 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:25.646 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:25.646 > git config core.sparsecheckout # timeout=10
00:00:25.661 > git read-tree -mu HEAD # timeout=10
00:00:25.678 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:25.699 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:25.699 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:25.837 [Pipeline] Start of Pipeline
00:00:25.848 [Pipeline] library
00:00:25.850 Loading library shm_lib@master
00:00:25.850 Library shm_lib@master is cached. Copying from home.
00:00:25.865 [Pipeline] node
00:00:40.868 Still waiting to schedule task
00:00:40.868 Waiting for next available executor on ‘vagrant-vm-host’
00:24:54.422 Running on VM-host-WFP1 in /var/jenkins/workspace/raid-vg-autotest
00:24:54.425 [Pipeline] {
00:24:54.436 [Pipeline] catchError
00:24:54.438 [Pipeline] {
00:24:54.457 [Pipeline] wrap
00:24:54.468 [Pipeline] {
00:24:54.479 [Pipeline] stage
00:24:54.482 [Pipeline] { (Prologue)
00:24:54.503 [Pipeline] echo
00:24:54.505 Node: VM-host-WFP1
00:24:54.513 [Pipeline] cleanWs
00:24:54.527 [WS-CLEANUP] Deleting project workspace...
00:24:54.527 [WS-CLEANUP] Deferred wipeout is used...
00:24:54.539 [WS-CLEANUP] done
00:24:54.814 [Pipeline] setCustomBuildProperty
00:24:54.931 [Pipeline] httpRequest
00:24:55.328 [Pipeline] echo
00:24:55.330 Sorcerer 10.211.164.112 is alive
00:24:55.342 [Pipeline] retry
00:24:55.344 [Pipeline] {
00:24:55.361 [Pipeline] httpRequest
00:24:55.370 HttpMethod: GET
00:24:55.370 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:24:55.371 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:24:55.380 Response Code: HTTP/1.1 200 OK
00:24:55.381 Success: Status code 200 is in the accepted range: 200,404
00:24:55.382 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:24:55.538 [Pipeline] }
00:24:55.555 [Pipeline] // retry
00:24:55.563 [Pipeline] sh
00:24:55.845 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:24:55.860 [Pipeline] httpRequest
00:24:56.245 [Pipeline] echo
00:24:56.247 Sorcerer 10.211.164.112 is alive
00:24:56.257 [Pipeline] retry
00:24:56.259 [Pipeline] {
00:24:56.275 [Pipeline] httpRequest
00:24:56.279 HttpMethod: GET
00:24:56.280 URL: http://10.211.164.112/packages/spdk_c12cb8fe35297bfebf155ee658660da0160fbc12.tar.gz
00:24:56.281 Sending request to url: http://10.211.164.112/packages/spdk_c12cb8fe35297bfebf155ee658660da0160fbc12.tar.gz
00:24:56.282 Response Code: HTTP/1.1 200 OK
00:24:56.283 Success: Status code 200 is in the accepted range: 200,404
00:24:56.283 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_c12cb8fe35297bfebf155ee658660da0160fbc12.tar.gz
00:24:58.563 [Pipeline] }
00:24:58.582 [Pipeline] // retry
00:24:58.589 [Pipeline] sh
00:24:58.870 + tar --no-same-owner -xf spdk_c12cb8fe35297bfebf155ee658660da0160fbc12.tar.gz
00:25:01.415 [Pipeline] sh
00:25:01.697 + git -C spdk log --oneline -n5
00:25:01.697 c12cb8fe3 util: add method for setting fd_group's wrapper
00:25:01.697 43c35d804 util: multi-level fd_group nesting
00:25:01.697 6336b7c5c util: keep track of nested child fd_groups
00:25:01.697 2e1d23f4b fuse_dispatcher: make header internal
00:25:01.697 3318278a6 vhost: check if vsession exists before remove scsi vdev
00:25:01.716 [Pipeline] writeFile
00:25:01.731 [Pipeline] sh
00:25:02.013 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:25:02.025 [Pipeline] sh
00:25:02.307 + cat autorun-spdk.conf
00:25:02.307 SPDK_RUN_FUNCTIONAL_TEST=1
00:25:02.307 SPDK_RUN_ASAN=1
00:25:02.307 SPDK_RUN_UBSAN=1
00:25:02.307 SPDK_TEST_RAID=1
00:25:02.307 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:25:02.315 RUN_NIGHTLY=0
00:25:02.317 [Pipeline] }
00:25:02.331 [Pipeline] // stage
00:25:02.345 [Pipeline] stage
00:25:02.348 [Pipeline] { (Run VM)
00:25:02.359 [Pipeline] sh
00:25:02.641 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:25:02.641 + echo 'Start stage prepare_nvme.sh'
00:25:02.641 Start stage prepare_nvme.sh
00:25:02.641 + [[ -n 2 ]]
00:25:02.641 + disk_prefix=ex2
00:25:02.641 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:25:02.641 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:25:02.641 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:25:02.641 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:25:02.641 ++ SPDK_RUN_ASAN=1
00:25:02.641 ++ SPDK_RUN_UBSAN=1
00:25:02.641 ++ SPDK_TEST_RAID=1
00:25:02.641 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:25:02.641 ++ RUN_NIGHTLY=0
00:25:02.641 + cd /var/jenkins/workspace/raid-vg-autotest
00:25:02.641 + nvme_files=()
00:25:02.641 + declare -A nvme_files
00:25:02.641 + backend_dir=/var/lib/libvirt/images/backends
00:25:02.641 + nvme_files['nvme.img']=5G
00:25:02.641 + nvme_files['nvme-cmb.img']=5G
00:25:02.641 + nvme_files['nvme-multi0.img']=4G
00:25:02.641 + nvme_files['nvme-multi1.img']=4G
00:25:02.641 + nvme_files['nvme-multi2.img']=4G
00:25:02.641 + nvme_files['nvme-openstack.img']=8G
00:25:02.641 + nvme_files['nvme-zns.img']=5G
00:25:02.641 + (( SPDK_TEST_NVME_PMR == 1 ))
00:25:02.641 + (( SPDK_TEST_FTL == 1 ))
00:25:02.641 + (( SPDK_TEST_NVME_FDP == 1 ))
00:25:02.641 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:25:02.641 + for nvme in "${!nvme_files[@]}"
00:25:02.641 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G
00:25:02.641 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:25:02.641 + for nvme in "${!nvme_files[@]}"
00:25:02.641 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G
00:25:02.641 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:25:02.641 + for nvme in "${!nvme_files[@]}"
00:25:02.641 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G
00:25:02.641 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:25:02.641 + for nvme in "${!nvme_files[@]}"
00:25:02.641 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G
00:25:02.641 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:25:02.641 + for nvme in "${!nvme_files[@]}"
00:25:02.641 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G
00:25:02.641 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:25:02.641 + for nvme in "${!nvme_files[@]}"
00:25:02.641 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G
00:25:02.899 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:25:02.899 + for nvme in "${!nvme_files[@]}"
00:25:02.899 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G
00:25:02.899 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:25:02.899 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu
00:25:02.899 + echo 'End stage prepare_nvme.sh'
00:25:02.899 End stage prepare_nvme.sh
00:25:02.909 [Pipeline] sh
00:25:03.188 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:25:03.189 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora39
00:25:03.189 
00:25:03.189 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:25:03.189 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:25:03.189 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:25:03.189 HELP=0
00:25:03.189 DRY_RUN=0
00:25:03.189 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,
00:25:03.189 NVME_DISKS_TYPE=nvme,nvme,
00:25:03.189 NVME_AUTO_CREATE=0
00:25:03.189 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,
00:25:03.189 NVME_CMB=,,
00:25:03.189 NVME_PMR=,,
00:25:03.189 NVME_ZNS=,,
00:25:03.189 NVME_MS=,,
00:25:03.189 NVME_FDP=,,
00:25:03.189 SPDK_VAGRANT_DISTRO=fedora39
00:25:03.189 SPDK_VAGRANT_VMCPU=10
00:25:03.189 SPDK_VAGRANT_VMRAM=12288
00:25:03.189 SPDK_VAGRANT_PROVIDER=libvirt
00:25:03.189 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:25:03.189 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:25:03.189 SPDK_OPENSTACK_NETWORK=0
00:25:03.189 VAGRANT_PACKAGE_BOX=0
00:25:03.189 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:25:03.189 FORCE_DISTRO=true
00:25:03.189 VAGRANT_BOX_VERSION=
00:25:03.189 EXTRA_VAGRANTFILES=
00:25:03.189 NIC_MODEL=e1000
00:25:03.189 
00:25:03.189 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:25:03.189 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:25:05.733 Bringing machine 'default' up with 'libvirt' provider...
00:25:07.638 ==> default: Creating image (snapshot of base box volume).
00:25:07.638 ==> default: Creating domain with the following settings...
00:25:07.638 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733785606_2bf6f063d5610259c378
00:25:07.638 ==> default: -- Domain type: kvm
00:25:07.638 ==> default: -- Cpus: 10
00:25:07.638 ==> default: -- Feature: acpi
00:25:07.638 ==> default: -- Feature: apic
00:25:07.638 ==> default: -- Feature: pae
00:25:07.638 ==> default: -- Memory: 12288M
00:25:07.638 ==> default: -- Memory Backing: hugepages:
00:25:07.638 ==> default: -- Management MAC:
00:25:07.638 ==> default: -- Loader:
00:25:07.638 ==> default: -- Nvram:
00:25:07.638 ==> default: -- Base box: spdk/fedora39
00:25:07.638 ==> default: -- Storage pool: default
00:25:07.638 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733785606_2bf6f063d5610259c378.img (20G)
00:25:07.638 ==> default: -- Volume Cache: default
00:25:07.638 ==> default: -- Kernel:
00:25:07.638 ==> default: -- Initrd:
00:25:07.638 ==> default: -- Graphics Type: vnc
00:25:07.638 ==> default: -- Graphics Port: -1
00:25:07.638 ==> default: -- Graphics IP: 127.0.0.1
00:25:07.638 ==> default: -- Graphics Password: Not defined
00:25:07.638 ==> default: -- Video Type: cirrus
00:25:07.638 ==> default: -- Video VRAM: 9216
00:25:07.638 ==> default: -- Sound Type:
00:25:07.638 ==> default: -- Keymap: en-us
00:25:07.638 ==> default: -- TPM Path:
00:25:07.638 ==> default: -- INPUT: type=mouse, bus=ps2
00:25:07.638 ==> default: -- Command line args:
00:25:07.638 ==> default: -> value=-device,
00:25:07.638 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:25:07.638 ==> default: -> value=-drive,
00:25:07.638 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0,
00:25:07.638 ==> default: -> value=-device,
00:25:07.638 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:25:07.638 ==> default: -> value=-device,
00:25:07.638 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:25:07.638 ==> default: -> value=-drive,
00:25:07.638 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:25:07.638 ==> default: -> value=-device,
00:25:07.638 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:25:07.638 ==> default: -> value=-drive,
00:25:07.638 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:25:07.638 ==> default: -> value=-device,
00:25:07.638 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:25:07.638 ==> default: -> value=-drive,
00:25:07.638 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:25:07.638 ==> default: -> value=-device,
00:25:07.638 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:25:07.897 ==> default: Creating shared folders metadata...
00:25:07.897 ==> default: Starting domain.
00:25:09.802 ==> default: Waiting for domain to get an IP address...
00:25:27.951 ==> default: Waiting for SSH to become available...
00:25:29.326 ==> default: Configuring and enabling network interfaces...
00:25:34.602 default: SSH address: 192.168.121.126:22
00:25:34.602 default: SSH username: vagrant
00:25:34.602 default: SSH auth method: private key
00:25:37.139 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:25:47.131 ==> default: Mounting SSHFS shared folder...
00:25:48.067 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:25:48.067 ==> default: Checking Mount..
00:25:50.042 ==> default: Folder Successfully Mounted!
00:25:50.042 ==> default: Running provisioner: file...
00:25:50.977 default: ~/.gitconfig => .gitconfig
00:25:51.245 
00:25:51.245 SUCCESS!
00:25:51.245 
00:25:51.245 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:25:51.245 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:25:51.245 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:25:51.245 
00:25:51.280 [Pipeline] }
00:25:51.302 [Pipeline] // stage
00:25:51.307 [Pipeline] dir
00:25:51.307 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:25:51.308 [Pipeline] {
00:25:51.315 [Pipeline] catchError
00:25:51.316 [Pipeline] {
00:25:51.323 [Pipeline] sh
00:25:51.596 + vagrant ssh-config --host vagrant
00:25:51.596 + sed -ne /^Host/,$p
00:25:51.596 + tee ssh_conf
00:25:54.878 Host vagrant
00:25:54.878 HostName 192.168.121.126
00:25:54.878 User vagrant
00:25:54.878 Port 22
00:25:54.878 UserKnownHostsFile /dev/null
00:25:54.878 StrictHostKeyChecking no
00:25:54.878 PasswordAuthentication no
00:25:54.878 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:25:54.878 IdentitiesOnly yes
00:25:54.878 LogLevel FATAL
00:25:54.878 ForwardAgent yes
00:25:54.878 ForwardX11 yes
00:25:54.878 
00:25:54.890 [Pipeline] withEnv
00:25:54.892 [Pipeline] {
00:25:54.904 [Pipeline] sh
00:25:55.181 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:25:55.181 source /etc/os-release
00:25:55.181 [[ -e /image.version ]] && img=$(< /image.version)
00:25:55.181 # Minimal, systemd-like check.
00:25:55.181 if [[ -e /.dockerenv ]]; then
00:25:55.181 # Clear garbage from the node's name:
00:25:55.181 # agt-er_autotest_547-896 -> autotest_547-896
00:25:55.181 # $HOSTNAME is the actual container id
00:25:55.182 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:25:55.182 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:25:55.182 # We can assume this is a mount from a host where container is running,
00:25:55.182 # so fetch its hostname to easily identify the target swarm worker.
00:25:55.182 container="$(< /etc/hostname) ($agent)"
00:25:55.182 else
00:25:55.182 # Fallback
00:25:55.182 container=$agent
00:25:55.182 fi
00:25:55.182 fi
00:25:55.182 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:25:55.182 
00:25:55.452 [Pipeline] }
00:25:55.467 [Pipeline] // withEnv
00:25:55.475 [Pipeline] setCustomBuildProperty
00:25:55.488 [Pipeline] stage
00:25:55.490 [Pipeline] { (Tests)
00:25:55.505 [Pipeline] sh
00:25:55.786 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:25:56.059 [Pipeline] sh
00:25:56.341 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:25:56.619 [Pipeline] timeout
00:25:56.620 Timeout set to expire in 1 hr 30 min
00:25:56.622 [Pipeline] {
00:25:56.643 [Pipeline] sh
00:25:56.928 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:25:57.496 HEAD is now at c12cb8fe3 util: add method for setting fd_group's wrapper
00:25:57.508 [Pipeline] sh
00:25:57.787 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:25:58.058 [Pipeline] sh
00:25:58.340 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:25:58.614 [Pipeline] sh
00:25:58.894 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:25:59.153 ++ readlink -f spdk_repo
00:25:59.153 + DIR_ROOT=/home/vagrant/spdk_repo
00:25:59.153 + [[ -n /home/vagrant/spdk_repo ]]
00:25:59.153 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:25:59.153 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:25:59.153 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:25:59.154 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:25:59.154 + [[ -d /home/vagrant/spdk_repo/output ]]
00:25:59.154 + [[ raid-vg-autotest == pkgdep-* ]]
00:25:59.154 + cd /home/vagrant/spdk_repo
00:25:59.154 + source /etc/os-release
00:25:59.154 ++ NAME='Fedora Linux'
00:25:59.154 ++ VERSION='39 (Cloud Edition)'
00:25:59.154 ++ ID=fedora
00:25:59.154 ++ VERSION_ID=39
00:25:59.154 ++ VERSION_CODENAME=
00:25:59.154 ++ PLATFORM_ID=platform:f39
00:25:59.154 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:25:59.154 ++ ANSI_COLOR='0;38;2;60;110;180'
00:25:59.154 ++ LOGO=fedora-logo-icon
00:25:59.154 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:25:59.154 ++ HOME_URL=https://fedoraproject.org/
00:25:59.154 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:25:59.154 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:25:59.154 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:25:59.154 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:25:59.154 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:25:59.154 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:25:59.154 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:25:59.154 ++ SUPPORT_END=2024-11-12
00:25:59.154 ++ VARIANT='Cloud Edition'
00:25:59.154 ++ VARIANT_ID=cloud
00:25:59.154 + uname -a
00:25:59.154 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:25:59.154 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:25:59.733 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:25:59.733 Hugepages
00:25:59.733 node hugesize free / total
00:25:59.733 node0 1048576kB 0 / 0
00:25:59.733 node0 2048kB 0 / 0
00:25:59.733 
00:25:59.733 Type BDF Vendor Device NUMA Driver Device Block devices
00:25:59.733 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:25:59.733 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:25:59.733 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:25:59.733 + rm -f /tmp/spdk-ld-path
00:25:59.733 + source autorun-spdk.conf
00:25:59.733 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:25:59.733 ++ SPDK_RUN_ASAN=1
00:25:59.733 ++ SPDK_RUN_UBSAN=1
00:25:59.733 ++ SPDK_TEST_RAID=1
00:25:59.733 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:25:59.733 ++ RUN_NIGHTLY=0
00:25:59.733 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:25:59.733 + [[ -n '' ]]
00:25:59.733 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:25:59.733 + for M in /var/spdk/build-*-manifest.txt
00:25:59.733 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:25:59.733 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:25:59.733 + for M in /var/spdk/build-*-manifest.txt
00:25:59.733 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:25:59.733 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:25:59.733 + for M in /var/spdk/build-*-manifest.txt
00:25:59.733 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:25:59.733 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:25:59.733 ++ uname
00:25:59.733 + [[ Linux == \L\i\n\u\x ]]
00:25:59.733 + sudo dmesg -T
00:25:59.733 + sudo dmesg --clear
00:25:59.993 + dmesg_pid=5205
00:25:59.993 + [[ Fedora Linux == FreeBSD ]]
00:25:59.993 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:25:59.993 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:25:59.993 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:25:59.993 + [[ -x /usr/src/fio-static/fio ]]
00:25:59.993 + sudo dmesg -Tw
00:25:59.993 + export FIO_BIN=/usr/src/fio-static/fio
00:25:59.993 + FIO_BIN=/usr/src/fio-static/fio
00:25:59.993 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:25:59.993 + [[ ! -v VFIO_QEMU_BIN ]]
00:25:59.993 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:25:59.993 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:25:59.993 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:25:59.993 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:25:59.993 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:25:59.993 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:25:59.993 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:25:59.993 23:07:40 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:25:59.993 23:07:40 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:25:59.993 23:07:40 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:25:59.993 23:07:40 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:25:59.993 23:07:40 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:25:59.993 23:07:40 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:25:59.993 23:07:40 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:25:59.993 23:07:40 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:25:59.993 23:07:40 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:25:59.993 23:07:40 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:25:59.993 23:07:40 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:25:59.993 23:07:40 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:25:59.993 23:07:40 -- scripts/common.sh@15 -- $ shopt -s extglob
00:25:59.993 23:07:40 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:25:59.993 23:07:40 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:59.993 23:07:40 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:59.993 23:07:40 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:59.993 23:07:40 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:59.993 23:07:40 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:59.993 23:07:40 -- paths/export.sh@5 -- $ export PATH
00:25:59.993 23:07:40 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:59.993 23:07:40 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:25:59.993 23:07:40 -- common/autobuild_common.sh@493 -- $ date +%s
00:25:59.993 23:07:40 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733785660.XXXXXX
00:25:59.993 23:07:40 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733785660.RvVas7
00:25:59.993 23:07:40 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:25:59.993 23:07:40 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:25:59.993 23:07:40 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:25:59.993 23:07:40 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:25:59.993 23:07:40 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:25:59.993 23:07:40 -- common/autobuild_common.sh@509 -- $ get_config_params
00:25:59.993 23:07:40 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:25:59.993 23:07:40 -- common/autotest_common.sh@10 -- $ set +x
00:25:59.993 23:07:40 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:25:59.993 23:07:40 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:25:59.993 23:07:40 -- pm/common@17 -- $ local monitor
00:25:59.993 23:07:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:25:59.993 23:07:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:25:59.993 23:07:40 -- pm/common@25 -- $ sleep 1
00:25:59.993 23:07:40 -- pm/common@21 -- $ date +%s
00:25:59.993 23:07:40 -- pm/common@21 -- $ date +%s
00:25:59.993 23:07:40 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733785660
00:25:59.993 23:07:40 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733785660
00:26:00.251 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733785660_collect-cpu-load.pm.log
00:26:00.251 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733785660_collect-vmstat.pm.log
00:26:01.188 23:07:41 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:26:01.188 23:07:41 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:26:01.188 23:07:41 -- spdk/autobuild.sh@12 -- $ umask 022
00:26:01.188 23:07:41 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:26:01.188 23:07:41 -- spdk/autobuild.sh@16 -- $ date -u
00:26:01.188 Mon Dec 9 11:07:41 PM UTC 2024
00:26:01.188 23:07:41 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:26:01.188 v25.01-pre-316-gc12cb8fe3
00:26:01.188 23:07:41 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:26:01.188 23:07:41 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:26:01.188 23:07:41 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:26:01.188 23:07:41 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:26:01.188 23:07:41 -- common/autotest_common.sh@10 -- $ set +x
00:26:01.188 ************************************
00:26:01.188 START TEST asan
00:26:01.188 ************************************
00:26:01.188 using asan
00:26:01.188 23:07:41 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:26:01.188 
00:26:01.188 real 0m0.001s
00:26:01.188 user 0m0.001s
00:26:01.188 sys 0m0.000s
00:26:01.188 23:07:41 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:26:01.188 23:07:41 asan -- common/autotest_common.sh@10 -- $ set +x
00:26:01.188 ************************************
00:26:01.188 END TEST asan
00:26:01.188 ************************************
00:26:01.188 23:07:41 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:26:01.188 23:07:41 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:26:01.188 23:07:41 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:26:01.188 23:07:41 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:26:01.188 23:07:41 -- common/autotest_common.sh@10 -- $ set +x
00:26:01.188 ************************************
00:26:01.188 START TEST ubsan
00:26:01.188 ************************************
00:26:01.188 using ubsan
00:26:01.188 23:07:41 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:26:01.188 
00:26:01.188 real 0m0.001s
00:26:01.188 user 0m0.000s
00:26:01.188 sys 0m0.000s
00:26:01.188 23:07:41 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:26:01.188 23:07:41 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:26:01.188 ************************************
00:26:01.188 END TEST ubsan
00:26:01.188 ************************************
00:26:01.188 23:07:41 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:26:01.188 23:07:41 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:26:01.188 23:07:41 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:26:01.188 23:07:41 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:26:01.188 23:07:41 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:26:01.188 23:07:41 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:26:01.188 23:07:41 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:26:01.188 23:07:41 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:26:01.188 23:07:41 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:26:01.447 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:26:01.447 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:26:02.012 Using 'verbs' RDMA provider
00:26:17.834 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:26:35.916 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:26:35.916 Creating mk/config.mk...done.
00:26:35.916 Creating mk/cc.flags.mk...done.
00:26:35.916 Type 'make' to build.
00:26:35.916 23:08:14 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:26:35.916 23:08:14 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:26:35.916 23:08:14 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:26:35.916 23:08:14 -- common/autotest_common.sh@10 -- $ set +x
00:26:35.916 ************************************
00:26:35.916 START TEST make
00:26:35.916 ************************************
00:26:35.916 23:08:14 make -- common/autotest_common.sh@1129 -- $ make -j10
00:26:35.916 make[1]: Nothing to be done for 'all'.
00:26:45.895 The Meson build system 00:26:45.895 Version: 1.5.0 00:26:45.895 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:26:45.895 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:26:45.895 Build type: native build 00:26:45.895 Program cat found: YES (/usr/bin/cat) 00:26:45.895 Project name: DPDK 00:26:45.895 Project version: 24.03.0 00:26:45.895 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:26:45.895 C linker for the host machine: cc ld.bfd 2.40-14 00:26:45.895 Host machine cpu family: x86_64 00:26:45.895 Host machine cpu: x86_64 00:26:45.895 Message: ## Building in Developer Mode ## 00:26:45.895 Program pkg-config found: YES (/usr/bin/pkg-config) 00:26:45.895 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:26:45.895 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:26:45.895 Program python3 found: YES (/usr/bin/python3) 00:26:45.895 Program cat found: YES (/usr/bin/cat) 00:26:45.895 Compiler for C supports arguments -march=native: YES 00:26:45.895 Checking for size of "void *" : 8 00:26:45.895 Checking for size of "void *" : 8 (cached) 00:26:45.895 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:26:45.895 Library m found: YES 00:26:45.895 Library numa found: YES 00:26:45.895 Has header "numaif.h" : YES 00:26:45.895 Library fdt found: NO 00:26:45.895 Library execinfo found: NO 00:26:45.895 Has header "execinfo.h" : YES 00:26:45.895 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:26:45.895 Run-time dependency libarchive found: NO (tried pkgconfig) 00:26:45.895 Run-time dependency libbsd found: NO (tried pkgconfig) 00:26:45.895 Run-time dependency jansson found: NO (tried pkgconfig) 00:26:45.895 Run-time dependency openssl found: YES 3.1.1 00:26:45.895 Run-time dependency libpcap found: YES 1.10.4 00:26:45.895 Has header "pcap.h" with dependency 
libpcap: YES 00:26:45.895 Compiler for C supports arguments -Wcast-qual: YES 00:26:45.895 Compiler for C supports arguments -Wdeprecated: YES 00:26:45.895 Compiler for C supports arguments -Wformat: YES 00:26:45.895 Compiler for C supports arguments -Wformat-nonliteral: NO 00:26:45.895 Compiler for C supports arguments -Wformat-security: NO 00:26:45.895 Compiler for C supports arguments -Wmissing-declarations: YES 00:26:45.895 Compiler for C supports arguments -Wmissing-prototypes: YES 00:26:45.895 Compiler for C supports arguments -Wnested-externs: YES 00:26:45.895 Compiler for C supports arguments -Wold-style-definition: YES 00:26:45.895 Compiler for C supports arguments -Wpointer-arith: YES 00:26:45.895 Compiler for C supports arguments -Wsign-compare: YES 00:26:45.895 Compiler for C supports arguments -Wstrict-prototypes: YES 00:26:45.895 Compiler for C supports arguments -Wundef: YES 00:26:45.895 Compiler for C supports arguments -Wwrite-strings: YES 00:26:45.895 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:26:45.895 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:26:45.895 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:26:45.895 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:26:45.895 Program objdump found: YES (/usr/bin/objdump) 00:26:45.895 Compiler for C supports arguments -mavx512f: YES 00:26:45.895 Checking if "AVX512 checking" compiles: YES 00:26:45.895 Fetching value of define "__SSE4_2__" : 1 00:26:45.895 Fetching value of define "__AES__" : 1 00:26:45.895 Fetching value of define "__AVX__" : 1 00:26:45.895 Fetching value of define "__AVX2__" : 1 00:26:45.895 Fetching value of define "__AVX512BW__" : 1 00:26:45.895 Fetching value of define "__AVX512CD__" : 1 00:26:45.895 Fetching value of define "__AVX512DQ__" : 1 00:26:45.895 Fetching value of define "__AVX512F__" : 1 00:26:45.895 Fetching value of define "__AVX512VL__" : 1 00:26:45.895 Fetching value of define 
"__PCLMUL__" : 1 00:26:45.895 Fetching value of define "__RDRND__" : 1 00:26:45.895 Fetching value of define "__RDSEED__" : 1 00:26:45.895 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:26:45.895 Fetching value of define "__znver1__" : (undefined) 00:26:45.895 Fetching value of define "__znver2__" : (undefined) 00:26:45.895 Fetching value of define "__znver3__" : (undefined) 00:26:45.895 Fetching value of define "__znver4__" : (undefined) 00:26:45.895 Library asan found: YES 00:26:45.895 Compiler for C supports arguments -Wno-format-truncation: YES 00:26:45.895 Message: lib/log: Defining dependency "log" 00:26:45.895 Message: lib/kvargs: Defining dependency "kvargs" 00:26:45.895 Message: lib/telemetry: Defining dependency "telemetry" 00:26:45.895 Library rt found: YES 00:26:45.895 Checking for function "getentropy" : NO 00:26:45.895 Message: lib/eal: Defining dependency "eal" 00:26:45.895 Message: lib/ring: Defining dependency "ring" 00:26:45.895 Message: lib/rcu: Defining dependency "rcu" 00:26:45.895 Message: lib/mempool: Defining dependency "mempool" 00:26:45.895 Message: lib/mbuf: Defining dependency "mbuf" 00:26:45.895 Fetching value of define "__PCLMUL__" : 1 (cached) 00:26:45.895 Fetching value of define "__AVX512F__" : 1 (cached) 00:26:45.895 Fetching value of define "__AVX512BW__" : 1 (cached) 00:26:45.896 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:26:45.896 Fetching value of define "__AVX512VL__" : 1 (cached) 00:26:45.896 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:26:45.896 Compiler for C supports arguments -mpclmul: YES 00:26:45.896 Compiler for C supports arguments -maes: YES 00:26:45.896 Compiler for C supports arguments -mavx512f: YES (cached) 00:26:45.896 Compiler for C supports arguments -mavx512bw: YES 00:26:45.896 Compiler for C supports arguments -mavx512dq: YES 00:26:45.896 Compiler for C supports arguments -mavx512vl: YES 00:26:45.896 Compiler for C supports arguments -mvpclmulqdq: YES 
00:26:45.896 Compiler for C supports arguments -mavx2: YES 00:26:45.896 Compiler for C supports arguments -mavx: YES 00:26:45.896 Message: lib/net: Defining dependency "net" 00:26:45.896 Message: lib/meter: Defining dependency "meter" 00:26:45.896 Message: lib/ethdev: Defining dependency "ethdev" 00:26:45.896 Message: lib/pci: Defining dependency "pci" 00:26:45.896 Message: lib/cmdline: Defining dependency "cmdline" 00:26:45.896 Message: lib/hash: Defining dependency "hash" 00:26:45.896 Message: lib/timer: Defining dependency "timer" 00:26:45.896 Message: lib/compressdev: Defining dependency "compressdev" 00:26:45.896 Message: lib/cryptodev: Defining dependency "cryptodev" 00:26:45.896 Message: lib/dmadev: Defining dependency "dmadev" 00:26:45.896 Compiler for C supports arguments -Wno-cast-qual: YES 00:26:45.896 Message: lib/power: Defining dependency "power" 00:26:45.896 Message: lib/reorder: Defining dependency "reorder" 00:26:45.896 Message: lib/security: Defining dependency "security" 00:26:45.896 Has header "linux/userfaultfd.h" : YES 00:26:45.896 Has header "linux/vduse.h" : YES 00:26:45.896 Message: lib/vhost: Defining dependency "vhost" 00:26:45.896 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:26:45.896 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:26:45.896 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:26:45.896 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:26:45.896 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:26:45.896 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:26:45.896 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:26:45.896 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:26:45.896 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:26:45.896 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:26:45.896 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:26:45.896 Configuring doxy-api-html.conf using configuration 00:26:45.896 Configuring doxy-api-man.conf using configuration 00:26:45.896 Program mandb found: YES (/usr/bin/mandb) 00:26:45.896 Program sphinx-build found: NO 00:26:45.896 Configuring rte_build_config.h using configuration 00:26:45.896 Message: 00:26:45.896 ================= 00:26:45.896 Applications Enabled 00:26:45.896 ================= 00:26:45.896 00:26:45.896 apps: 00:26:45.896 00:26:45.896 00:26:45.896 Message: 00:26:45.896 ================= 00:26:45.896 Libraries Enabled 00:26:45.896 ================= 00:26:45.896 00:26:45.896 libs: 00:26:45.896 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:26:45.896 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:26:45.896 cryptodev, dmadev, power, reorder, security, vhost, 00:26:45.896 00:26:45.896 Message: 00:26:45.896 =============== 00:26:45.896 Drivers Enabled 00:26:45.896 =============== 00:26:45.896 00:26:45.896 common: 00:26:45.896 00:26:45.896 bus: 00:26:45.896 pci, vdev, 00:26:45.896 mempool: 00:26:45.896 ring, 00:26:45.896 dma: 00:26:45.896 00:26:45.896 net: 00:26:45.896 00:26:45.896 crypto: 00:26:45.896 00:26:45.896 compress: 00:26:45.896 00:26:45.896 vdpa: 00:26:45.896 00:26:45.896 00:26:45.896 Message: 00:26:45.896 ================= 00:26:45.896 Content Skipped 00:26:45.896 ================= 00:26:45.896 00:26:45.896 apps: 00:26:45.896 dumpcap: explicitly disabled via build config 00:26:45.896 graph: explicitly disabled via build config 00:26:45.896 pdump: explicitly disabled via build config 00:26:45.896 proc-info: explicitly disabled via build config 00:26:45.896 test-acl: explicitly disabled via build config 00:26:45.896 test-bbdev: explicitly disabled via build config 00:26:45.896 test-cmdline: explicitly disabled via build config 00:26:45.896 test-compress-perf: explicitly disabled via build config 00:26:45.896 test-crypto-perf: explicitly disabled via build 
config 00:26:45.896 test-dma-perf: explicitly disabled via build config 00:26:45.896 test-eventdev: explicitly disabled via build config 00:26:45.896 test-fib: explicitly disabled via build config 00:26:45.896 test-flow-perf: explicitly disabled via build config 00:26:45.896 test-gpudev: explicitly disabled via build config 00:26:45.896 test-mldev: explicitly disabled via build config 00:26:45.896 test-pipeline: explicitly disabled via build config 00:26:45.896 test-pmd: explicitly disabled via build config 00:26:45.896 test-regex: explicitly disabled via build config 00:26:45.896 test-sad: explicitly disabled via build config 00:26:45.896 test-security-perf: explicitly disabled via build config 00:26:45.896 00:26:45.896 libs: 00:26:45.896 argparse: explicitly disabled via build config 00:26:45.896 metrics: explicitly disabled via build config 00:26:45.896 acl: explicitly disabled via build config 00:26:45.896 bbdev: explicitly disabled via build config 00:26:45.896 bitratestats: explicitly disabled via build config 00:26:45.896 bpf: explicitly disabled via build config 00:26:45.896 cfgfile: explicitly disabled via build config 00:26:45.896 distributor: explicitly disabled via build config 00:26:45.896 efd: explicitly disabled via build config 00:26:45.896 eventdev: explicitly disabled via build config 00:26:45.896 dispatcher: explicitly disabled via build config 00:26:45.896 gpudev: explicitly disabled via build config 00:26:45.896 gro: explicitly disabled via build config 00:26:45.896 gso: explicitly disabled via build config 00:26:45.896 ip_frag: explicitly disabled via build config 00:26:45.896 jobstats: explicitly disabled via build config 00:26:45.896 latencystats: explicitly disabled via build config 00:26:45.896 lpm: explicitly disabled via build config 00:26:45.896 member: explicitly disabled via build config 00:26:45.896 pcapng: explicitly disabled via build config 00:26:45.896 rawdev: explicitly disabled via build config 00:26:45.896 regexdev: explicitly 
disabled via build config 00:26:45.896 mldev: explicitly disabled via build config 00:26:45.896 rib: explicitly disabled via build config 00:26:45.896 sched: explicitly disabled via build config 00:26:45.896 stack: explicitly disabled via build config 00:26:45.896 ipsec: explicitly disabled via build config 00:26:45.896 pdcp: explicitly disabled via build config 00:26:45.896 fib: explicitly disabled via build config 00:26:45.896 port: explicitly disabled via build config 00:26:45.896 pdump: explicitly disabled via build config 00:26:45.896 table: explicitly disabled via build config 00:26:45.896 pipeline: explicitly disabled via build config 00:26:45.896 graph: explicitly disabled via build config 00:26:45.896 node: explicitly disabled via build config 00:26:45.896 00:26:45.896 drivers: 00:26:45.896 common/cpt: not in enabled drivers build config 00:26:45.896 common/dpaax: not in enabled drivers build config 00:26:45.896 common/iavf: not in enabled drivers build config 00:26:45.896 common/idpf: not in enabled drivers build config 00:26:45.896 common/ionic: not in enabled drivers build config 00:26:45.896 common/mvep: not in enabled drivers build config 00:26:45.896 common/octeontx: not in enabled drivers build config 00:26:45.896 bus/auxiliary: not in enabled drivers build config 00:26:45.896 bus/cdx: not in enabled drivers build config 00:26:45.896 bus/dpaa: not in enabled drivers build config 00:26:45.896 bus/fslmc: not in enabled drivers build config 00:26:45.896 bus/ifpga: not in enabled drivers build config 00:26:45.896 bus/platform: not in enabled drivers build config 00:26:45.896 bus/uacce: not in enabled drivers build config 00:26:45.896 bus/vmbus: not in enabled drivers build config 00:26:45.896 common/cnxk: not in enabled drivers build config 00:26:45.896 common/mlx5: not in enabled drivers build config 00:26:45.896 common/nfp: not in enabled drivers build config 00:26:45.896 common/nitrox: not in enabled drivers build config 00:26:45.896 common/qat: not 
in enabled drivers build config 00:26:45.896 common/sfc_efx: not in enabled drivers build config 00:26:45.896 mempool/bucket: not in enabled drivers build config 00:26:45.897 mempool/cnxk: not in enabled drivers build config 00:26:45.897 mempool/dpaa: not in enabled drivers build config 00:26:45.897 mempool/dpaa2: not in enabled drivers build config 00:26:45.897 mempool/octeontx: not in enabled drivers build config 00:26:45.897 mempool/stack: not in enabled drivers build config 00:26:45.897 dma/cnxk: not in enabled drivers build config 00:26:45.897 dma/dpaa: not in enabled drivers build config 00:26:45.897 dma/dpaa2: not in enabled drivers build config 00:26:45.897 dma/hisilicon: not in enabled drivers build config 00:26:45.897 dma/idxd: not in enabled drivers build config 00:26:45.897 dma/ioat: not in enabled drivers build config 00:26:45.897 dma/skeleton: not in enabled drivers build config 00:26:45.897 net/af_packet: not in enabled drivers build config 00:26:45.897 net/af_xdp: not in enabled drivers build config 00:26:45.897 net/ark: not in enabled drivers build config 00:26:45.897 net/atlantic: not in enabled drivers build config 00:26:45.897 net/avp: not in enabled drivers build config 00:26:45.897 net/axgbe: not in enabled drivers build config 00:26:45.897 net/bnx2x: not in enabled drivers build config 00:26:45.897 net/bnxt: not in enabled drivers build config 00:26:45.897 net/bonding: not in enabled drivers build config 00:26:45.897 net/cnxk: not in enabled drivers build config 00:26:45.897 net/cpfl: not in enabled drivers build config 00:26:45.897 net/cxgbe: not in enabled drivers build config 00:26:45.897 net/dpaa: not in enabled drivers build config 00:26:45.897 net/dpaa2: not in enabled drivers build config 00:26:45.897 net/e1000: not in enabled drivers build config 00:26:45.897 net/ena: not in enabled drivers build config 00:26:45.897 net/enetc: not in enabled drivers build config 00:26:45.897 net/enetfec: not in enabled drivers build config 
00:26:45.897 net/enic: not in enabled drivers build config 00:26:45.897 net/failsafe: not in enabled drivers build config 00:26:45.897 net/fm10k: not in enabled drivers build config 00:26:45.897 net/gve: not in enabled drivers build config 00:26:45.897 net/hinic: not in enabled drivers build config 00:26:45.897 net/hns3: not in enabled drivers build config 00:26:45.897 net/i40e: not in enabled drivers build config 00:26:45.897 net/iavf: not in enabled drivers build config 00:26:45.897 net/ice: not in enabled drivers build config 00:26:45.897 net/idpf: not in enabled drivers build config 00:26:45.897 net/igc: not in enabled drivers build config 00:26:45.897 net/ionic: not in enabled drivers build config 00:26:45.897 net/ipn3ke: not in enabled drivers build config 00:26:45.897 net/ixgbe: not in enabled drivers build config 00:26:45.897 net/mana: not in enabled drivers build config 00:26:45.897 net/memif: not in enabled drivers build config 00:26:45.897 net/mlx4: not in enabled drivers build config 00:26:45.897 net/mlx5: not in enabled drivers build config 00:26:45.897 net/mvneta: not in enabled drivers build config 00:26:45.897 net/mvpp2: not in enabled drivers build config 00:26:45.897 net/netvsc: not in enabled drivers build config 00:26:45.897 net/nfb: not in enabled drivers build config 00:26:45.897 net/nfp: not in enabled drivers build config 00:26:45.897 net/ngbe: not in enabled drivers build config 00:26:45.897 net/null: not in enabled drivers build config 00:26:45.897 net/octeontx: not in enabled drivers build config 00:26:45.897 net/octeon_ep: not in enabled drivers build config 00:26:45.897 net/pcap: not in enabled drivers build config 00:26:45.897 net/pfe: not in enabled drivers build config 00:26:45.897 net/qede: not in enabled drivers build config 00:26:45.897 net/ring: not in enabled drivers build config 00:26:45.897 net/sfc: not in enabled drivers build config 00:26:45.897 net/softnic: not in enabled drivers build config 00:26:45.897 net/tap: not in 
enabled drivers build config 00:26:45.897 net/thunderx: not in enabled drivers build config 00:26:45.897 net/txgbe: not in enabled drivers build config 00:26:45.897 net/vdev_netvsc: not in enabled drivers build config 00:26:45.897 net/vhost: not in enabled drivers build config 00:26:45.897 net/virtio: not in enabled drivers build config 00:26:45.897 net/vmxnet3: not in enabled drivers build config 00:26:45.897 raw/*: missing internal dependency, "rawdev" 00:26:45.897 crypto/armv8: not in enabled drivers build config 00:26:45.897 crypto/bcmfs: not in enabled drivers build config 00:26:45.897 crypto/caam_jr: not in enabled drivers build config 00:26:45.897 crypto/ccp: not in enabled drivers build config 00:26:45.897 crypto/cnxk: not in enabled drivers build config 00:26:45.897 crypto/dpaa_sec: not in enabled drivers build config 00:26:45.897 crypto/dpaa2_sec: not in enabled drivers build config 00:26:45.897 crypto/ipsec_mb: not in enabled drivers build config 00:26:45.897 crypto/mlx5: not in enabled drivers build config 00:26:45.897 crypto/mvsam: not in enabled drivers build config 00:26:45.897 crypto/nitrox: not in enabled drivers build config 00:26:45.897 crypto/null: not in enabled drivers build config 00:26:45.897 crypto/octeontx: not in enabled drivers build config 00:26:45.897 crypto/openssl: not in enabled drivers build config 00:26:45.897 crypto/scheduler: not in enabled drivers build config 00:26:45.897 crypto/uadk: not in enabled drivers build config 00:26:45.897 crypto/virtio: not in enabled drivers build config 00:26:45.897 compress/isal: not in enabled drivers build config 00:26:45.897 compress/mlx5: not in enabled drivers build config 00:26:45.897 compress/nitrox: not in enabled drivers build config 00:26:45.897 compress/octeontx: not in enabled drivers build config 00:26:45.897 compress/zlib: not in enabled drivers build config 00:26:45.897 regex/*: missing internal dependency, "regexdev" 00:26:45.897 ml/*: missing internal dependency, "mldev" 
00:26:45.897 vdpa/ifc: not in enabled drivers build config 00:26:45.897 vdpa/mlx5: not in enabled drivers build config 00:26:45.897 vdpa/nfp: not in enabled drivers build config 00:26:45.897 vdpa/sfc: not in enabled drivers build config 00:26:45.897 event/*: missing internal dependency, "eventdev" 00:26:45.897 baseband/*: missing internal dependency, "bbdev" 00:26:45.897 gpu/*: missing internal dependency, "gpudev" 00:26:45.897 00:26:45.897 00:26:45.897 Build targets in project: 85 00:26:45.897 00:26:45.897 DPDK 24.03.0 00:26:45.897 00:26:45.897 User defined options 00:26:45.897 buildtype : debug 00:26:45.897 default_library : shared 00:26:45.897 libdir : lib 00:26:45.897 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:26:45.897 b_sanitize : address 00:26:45.897 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:26:45.897 c_link_args : 00:26:45.897 cpu_instruction_set: native 00:26:45.897 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:26:45.897 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:26:45.897 enable_docs : false 00:26:45.897 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:26:45.897 enable_kmods : false 00:26:45.897 max_lcores : 128 00:26:45.897 tests : false 00:26:45.897 00:26:45.897 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:26:45.897 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:26:45.897 [1/268] Compiling C object 
lib/librte_log.a.p/log_log.c.o 00:26:45.897 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:26:45.897 [3/268] Linking static target lib/librte_log.a 00:26:45.897 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:26:45.897 [5/268] Linking static target lib/librte_kvargs.a 00:26:45.897 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:26:45.897 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:26:45.897 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:26:45.897 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:26:45.897 [10/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:26:45.897 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:26:45.897 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:26:45.897 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:26:45.897 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:26:45.897 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:26:46.157 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:26:46.157 [17/268] Linking static target lib/librte_telemetry.a 00:26:46.157 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:26:46.415 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:26:46.415 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:26:46.675 [21/268] Linking target lib/librte_log.so.24.1 00:26:46.675 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:26:46.675 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:26:46.675 [24/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:26:46.675 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:26:46.675 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:26:46.675 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:26:46.675 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:26:46.936 [29/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:26:46.936 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:26:46.936 [31/268] Linking target lib/librte_kvargs.so.24.1 00:26:46.936 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:26:46.936 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:26:47.195 [34/268] Linking target lib/librte_telemetry.so.24.1 00:26:47.195 [35/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:26:47.195 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:26:47.195 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:26:47.195 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:26:47.195 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:26:47.195 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:26:47.195 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:26:47.455 [42/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:26:47.455 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:26:47.455 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:26:47.455 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 
00:26:47.714 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:26:47.714 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:26:47.714 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:26:47.714 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:26:47.975 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:26:47.975 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:26:47.975 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:26:47.975 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:26:47.975 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:26:47.975 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:26:48.234 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:26:48.234 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:26:48.234 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:26:48.234 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:26:48.493 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:26:48.493 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:26:48.493 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:26:48.493 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:26:48.493 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:26:48.493 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:26:48.493 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:26:48.752 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:26:49.012 [68/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:26:49.012 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:26:49.012 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:26:49.012 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:26:49.387 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:26:49.387 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:26:49.387 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:26:49.387 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:26:49.387 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:26:49.387 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:26:49.387 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:26:49.387 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:26:49.690 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:26:49.690 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:26:49.690 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:26:49.950 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:26:49.950 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:26:49.950 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:26:49.950 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:26:49.950 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:26:49.950 [88/268] Linking static target lib/librte_ring.a 00:26:49.950 [89/268] Linking static target lib/librte_eal.a 00:26:50.208 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:26:50.208 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 
00:26:50.208 [92/268] Linking static target lib/librte_mempool.a
00:26:50.208 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:26:50.467 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:26:50.467 [95/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:26:50.467 [96/268] Linking static target lib/librte_rcu.a
00:26:50.467 [97/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:26:50.467 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:26:50.467 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:26:50.467 [100/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:26:50.726 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:26:50.726 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:26:50.726 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:26:50.985 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:26:50.985 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:26:50.985 [106/268] Linking static target lib/librte_net.a
00:26:50.985 [107/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:26:50.985 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:26:50.985 [109/268] Linking static target lib/librte_meter.a
00:26:51.242 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:26:51.242 [111/268] Linking static target lib/librte_mbuf.a
00:26:51.242 [112/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:26:51.242 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:26:51.500 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:26:51.500 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:26:51.500 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:26:51.500 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:26:51.500 [118/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:26:51.759 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:26:52.016 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:26:52.274 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:26:52.274 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:26:52.274 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:26:52.274 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:26:52.274 [125/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:26:52.274 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:26:52.532 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:26:52.532 [128/268] Linking static target lib/librte_pci.a
00:26:52.532 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:26:52.532 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:26:52.532 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:26:52.532 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:26:52.792 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:26:52.792 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:26:52.792 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:26:52.792 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:26:52.792 [137/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:26:52.792 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:26:52.792 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:26:52.792 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:26:52.792 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:26:53.051 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:26:53.051 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:26:53.051 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:26:53.051 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:26:53.051 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:26:53.051 [147/268] Linking static target lib/librte_cmdline.a
00:26:53.311 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:26:53.311 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:26:53.570 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:26:53.570 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:26:53.570 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:26:53.570 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:26:53.570 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:26:53.570 [155/268] Linking static target lib/librte_timer.a
00:26:53.827 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:26:54.087 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:26:54.087 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:26:54.087 [159/268] Linking static target lib/librte_compressdev.a
00:26:54.087 [160/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:26:54.087 [161/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:26:54.087 [162/268] Linking static target lib/librte_ethdev.a
00:26:54.347 [163/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:26:54.347 [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:26:54.347 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:26:54.347 [166/268] Linking static target lib/librte_dmadev.a
00:26:54.347 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:26:54.606 [168/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:26:54.606 [169/268] Linking static target lib/librte_hash.a
00:26:54.606 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:26:54.606 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:26:54.865 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:26:54.865 [173/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:26:54.865 [174/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:26:54.865 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:26:55.125 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:26:55.125 [177/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:26:55.125 [178/268] Linking static target lib/librte_cryptodev.a
00:26:55.125 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:26:55.384 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:26:55.384 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:26:55.384 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:26:55.384 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:26:55.384 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:26:55.643 [185/268] Linking static target lib/librte_power.a
00:26:55.643 [186/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:26:55.901 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:26:55.901 [188/268] Linking static target lib/librte_reorder.a
00:26:55.901 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:26:55.901 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:26:55.901 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:26:55.901 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:26:56.160 [193/268] Linking static target lib/librte_security.a
00:26:56.418 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:26:56.677 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:26:56.677 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:26:56.942 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:26:56.942 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:26:56.942 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:26:57.204 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:26:57.204 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:26:57.204 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:26:57.462 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:26:57.462 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:26:57.462 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:26:57.721 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:26:57.721 [207/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:26:57.721 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:26:57.721 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:26:57.721 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:26:57.721 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:26:57.980 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:26:57.980 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:26:57.980 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:26:57.980 [215/268] Linking static target drivers/librte_bus_vdev.a
00:26:57.980 [216/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:26:57.980 [217/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:26:57.980 [218/268] Linking static target drivers/librte_bus_pci.a
00:26:57.980 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:26:58.239 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:26:58.239 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:26:58.239 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:26:58.499 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:26:58.499 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:26:58.499 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:26:58.499 [226/268] Linking static target drivers/librte_mempool_ring.a
00:26:58.499 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:26:59.480 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:27:02.763 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:27:02.763 [230/268] Linking target lib/librte_eal.so.24.1
00:27:03.020 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:27:03.020 [232/268] Linking target lib/librte_pci.so.24.1
00:27:03.020 [233/268] Linking target lib/librte_timer.so.24.1
00:27:03.020 [234/268] Linking target lib/librte_ring.so.24.1
00:27:03.020 [235/268] Linking target lib/librte_meter.so.24.1
00:27:03.020 [236/268] Linking target drivers/librte_bus_vdev.so.24.1
00:27:03.020 [237/268] Linking target lib/librte_dmadev.so.24.1
00:27:03.020 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:27:03.279 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:27:03.279 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:27:03.279 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:27:03.279 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:27:03.279 [243/268] Linking target lib/librte_rcu.so.24.1
00:27:03.279 [244/268] Linking target lib/librte_mempool.so.24.1
00:27:03.279 [245/268] Linking target drivers/librte_bus_pci.so.24.1
00:27:03.279 [246/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:27:03.279 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:27:03.279 [248/268] Linking static target lib/librte_vhost.a
00:27:03.279 [249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:27:03.279 [250/268] Linking target drivers/librte_mempool_ring.so.24.1
00:27:03.279 [251/268] Linking target lib/librte_mbuf.so.24.1
00:27:03.538 [252/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:27:03.538 [253/268] Linking target lib/librte_compressdev.so.24.1
00:27:03.538 [254/268] Linking target lib/librte_cryptodev.so.24.1
00:27:03.538 [255/268] Linking target lib/librte_reorder.so.24.1
00:27:03.538 [256/268] Linking target lib/librte_net.so.24.1
00:27:03.538 [257/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:27:03.797 [258/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:27:03.797 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:27:03.797 [260/268] Linking target lib/librte_cmdline.so.24.1
00:27:03.797 [261/268] Linking target lib/librte_security.so.24.1
00:27:03.797 [262/268] Linking target lib/librte_hash.so.24.1
00:27:03.797 [263/268] Linking target lib/librte_ethdev.so.24.1
00:27:03.797 [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:27:04.055 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:27:04.056 [266/268] Linking target lib/librte_power.so.24.1
00:27:05.967 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:27:05.967 [268/268] Linking target lib/librte_vhost.so.24.1
00:27:05.967 INFO: autodetecting backend as ninja
00:27:05.967 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10
00:27:24.097 CC lib/log/log.o
00:27:24.097 CC lib/log/log_flags.o
00:27:24.097 CC lib/log/log_deprecated.o
00:27:24.097 CC lib/ut_mock/mock.o
00:27:24.097 CC lib/ut/ut.o
00:27:24.097 LIB libspdk_log.a
00:27:24.097 LIB libspdk_ut_mock.a
00:27:24.097 LIB libspdk_ut.a
00:27:24.097 SO libspdk_ut_mock.so.6.0
00:27:24.097 SO libspdk_log.so.7.1
00:27:24.097 SO libspdk_ut.so.2.0
00:27:24.097 SYMLINK libspdk_ut_mock.so
00:27:24.097 SYMLINK libspdk_log.so
00:27:24.097 SYMLINK libspdk_ut.so
00:27:24.097 CC lib/ioat/ioat.o
00:27:24.097 CC lib/util/base64.o
00:27:24.097 CC lib/util/bit_array.o
00:27:24.097 CXX lib/trace_parser/trace.o
00:27:24.097 CC lib/dma/dma.o
00:27:24.097 CC lib/util/crc16.o
00:27:24.097 CC lib/util/cpuset.o
00:27:24.097 CC lib/util/crc32.o
00:27:24.097 CC lib/util/crc32c.o
00:27:24.097 CC lib/vfio_user/host/vfio_user_pci.o
00:27:24.097 CC lib/vfio_user/host/vfio_user.o
00:27:24.097 CC lib/util/crc32_ieee.o
00:27:24.097 CC lib/util/crc64.o
00:27:24.097 CC lib/util/dif.o
00:27:24.097 CC lib/util/fd.o
00:27:24.097 CC lib/util/fd_group.o
00:27:24.097 LIB libspdk_dma.a
00:27:24.097 CC lib/util/file.o
00:27:24.097 CC lib/util/hexlify.o
00:27:24.097 SO libspdk_dma.so.5.0
00:27:24.097 LIB libspdk_ioat.a
00:27:24.097 CC lib/util/iov.o
00:27:24.097 SYMLINK libspdk_dma.so
00:27:24.097 CC lib/util/math.o
00:27:24.097 CC lib/util/net.o
00:27:24.097 SO libspdk_ioat.so.7.0
00:27:24.097 LIB libspdk_vfio_user.a
00:27:24.097 CC lib/util/pipe.o
00:27:24.097 SO libspdk_vfio_user.so.5.0
00:27:24.097 SYMLINK libspdk_ioat.so
00:27:24.097 CC lib/util/strerror_tls.o
00:27:24.097 CC lib/util/string.o
00:27:24.097 SYMLINK libspdk_vfio_user.so
00:27:24.097 CC lib/util/uuid.o
00:27:24.097 CC lib/util/xor.o
00:27:24.097 CC lib/util/zipf.o
00:27:24.097 CC lib/util/md5.o
00:27:24.097 LIB libspdk_util.a
00:27:24.356 LIB libspdk_trace_parser.a
00:27:24.356 SO libspdk_util.so.10.1
00:27:24.356 SO libspdk_trace_parser.so.6.0
00:27:24.356 SYMLINK libspdk_util.so
00:27:24.356 SYMLINK libspdk_trace_parser.so
00:27:24.623 CC lib/env_dpdk/env.o
00:27:24.623 CC lib/env_dpdk/pci.o
00:27:24.623 CC lib/env_dpdk/memory.o
00:27:24.623 CC lib/env_dpdk/init.o
00:27:24.623 CC lib/env_dpdk/threads.o
00:27:24.623 CC lib/vmd/vmd.o
00:27:24.623 CC lib/rdma_utils/rdma_utils.o
00:27:24.623 CC lib/idxd/idxd.o
00:27:24.623 CC lib/conf/conf.o
00:27:24.623 CC lib/json/json_parse.o
00:27:24.883 CC lib/env_dpdk/pci_ioat.o
00:27:24.883 CC lib/json/json_util.o
00:27:24.883 LIB libspdk_conf.a
00:27:24.883 CC lib/json/json_write.o
00:27:24.883 LIB libspdk_rdma_utils.a
00:27:24.883 SO libspdk_conf.so.6.0
00:27:24.883 SO libspdk_rdma_utils.so.1.0
00:27:24.883 SYMLINK libspdk_conf.so
00:27:24.883 SYMLINK libspdk_rdma_utils.so
00:27:25.140 CC lib/idxd/idxd_user.o
00:27:25.140 CC lib/idxd/idxd_kernel.o
00:27:25.140 CC lib/vmd/led.o
00:27:25.140 CC lib/env_dpdk/pci_virtio.o
00:27:25.140 CC lib/rdma_provider/common.o
00:27:25.140 CC lib/rdma_provider/rdma_provider_verbs.o
00:27:25.140 CC lib/env_dpdk/pci_vmd.o
00:27:25.140 LIB libspdk_json.a
00:27:25.140 CC lib/env_dpdk/pci_idxd.o
00:27:25.400 SO libspdk_json.so.6.0
00:27:25.400 CC lib/env_dpdk/pci_event.o
00:27:25.400 CC lib/env_dpdk/sigbus_handler.o
00:27:25.400 LIB libspdk_idxd.a
00:27:25.400 SYMLINK libspdk_json.so
00:27:25.400 CC lib/env_dpdk/pci_dpdk.o
00:27:25.400 CC lib/env_dpdk/pci_dpdk_2207.o
00:27:25.400 LIB libspdk_vmd.a
00:27:25.400 LIB libspdk_rdma_provider.a
00:27:25.400 SO libspdk_idxd.so.12.1
00:27:25.400 CC lib/env_dpdk/pci_dpdk_2211.o
00:27:25.400 SO libspdk_vmd.so.6.0
00:27:25.400 SO libspdk_rdma_provider.so.7.0
00:27:25.400 SYMLINK libspdk_vmd.so
00:27:25.400 SYMLINK libspdk_idxd.so
00:27:25.400 SYMLINK libspdk_rdma_provider.so
00:27:25.659 CC lib/jsonrpc/jsonrpc_server.o
00:27:25.659 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:27:25.659 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:27:25.659 CC lib/jsonrpc/jsonrpc_client.o
00:27:25.918 LIB libspdk_jsonrpc.a
00:27:25.918 SO libspdk_jsonrpc.so.6.0
00:27:25.918 SYMLINK libspdk_jsonrpc.so
00:27:26.176 LIB libspdk_env_dpdk.a
00:27:26.435 CC lib/rpc/rpc.o
00:27:26.435 SO libspdk_env_dpdk.so.15.1
00:27:26.435 SYMLINK libspdk_env_dpdk.so
00:27:26.694 LIB libspdk_rpc.a
00:27:26.694 SO libspdk_rpc.so.6.0
00:27:26.694 SYMLINK libspdk_rpc.so
00:27:27.261 CC lib/trace/trace.o
00:27:27.261 CC lib/trace/trace_flags.o
00:27:27.261 CC lib/trace/trace_rpc.o
00:27:27.261 CC lib/notify/notify.o
00:27:27.261 CC lib/keyring/keyring.o
00:27:27.261 CC lib/keyring/keyring_rpc.o
00:27:27.261 CC lib/notify/notify_rpc.o
00:27:27.261 LIB libspdk_notify.a
00:27:27.520 SO libspdk_notify.so.6.0
00:27:27.520 LIB libspdk_keyring.a
00:27:27.520 LIB libspdk_trace.a
00:27:27.520 SYMLINK libspdk_notify.so
00:27:27.520 SO libspdk_keyring.so.2.0
00:27:27.520 SO libspdk_trace.so.11.0
00:27:27.520 SYMLINK libspdk_keyring.so
00:27:27.520 SYMLINK libspdk_trace.so
00:27:28.087 CC lib/sock/sock.o
00:27:28.087 CC lib/sock/sock_rpc.o
00:27:28.087 CC lib/thread/thread.o
00:27:28.087 CC lib/thread/iobuf.o
00:27:28.346 LIB libspdk_sock.a
00:27:28.346 SO libspdk_sock.so.10.0
00:27:28.606 SYMLINK libspdk_sock.so
00:27:28.866 CC lib/nvme/nvme_ctrlr_cmd.o
00:27:28.866 CC lib/nvme/nvme_ctrlr.o
00:27:28.866 CC lib/nvme/nvme_fabric.o
00:27:28.866 CC lib/nvme/nvme_ns_cmd.o
00:27:28.866 CC lib/nvme/nvme_pcie.o
00:27:28.866 CC lib/nvme/nvme_ns.o
00:27:28.866 CC lib/nvme/nvme_pcie_common.o
00:27:28.866 CC lib/nvme/nvme.o
00:27:28.866 CC lib/nvme/nvme_qpair.o
00:27:29.803 CC lib/nvme/nvme_quirks.o
00:27:29.803 LIB libspdk_thread.a
00:27:29.803 CC lib/nvme/nvme_transport.o
00:27:29.803 CC lib/nvme/nvme_discovery.o
00:27:29.803 SO libspdk_thread.so.11.0
00:27:29.803 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:27:29.803 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:27:29.803 CC lib/nvme/nvme_tcp.o
00:27:29.803 SYMLINK libspdk_thread.so
00:27:29.803 CC lib/nvme/nvme_opal.o
00:27:29.803 CC lib/nvme/nvme_io_msg.o
00:27:30.078 CC lib/nvme/nvme_poll_group.o
00:27:30.375 CC lib/nvme/nvme_zns.o
00:27:30.375 CC lib/nvme/nvme_stubs.o
00:27:30.375 CC lib/nvme/nvme_auth.o
00:27:30.375 CC lib/accel/accel.o
00:27:30.375 CC lib/nvme/nvme_cuse.o
00:27:30.375 CC lib/accel/accel_rpc.o
00:27:30.375 CC lib/accel/accel_sw.o
00:27:30.634 CC lib/nvme/nvme_rdma.o
00:27:30.893 CC lib/blob/blobstore.o
00:27:30.893 CC lib/blob/request.o
00:27:30.893 CC lib/init/json_config.o
00:27:30.893 CC lib/virtio/virtio.o
00:27:31.152 CC lib/blob/zeroes.o
00:27:31.152 CC lib/init/subsystem.o
00:27:31.411 CC lib/virtio/virtio_vhost_user.o
00:27:31.411 CC lib/virtio/virtio_vfio_user.o
00:27:31.411 CC lib/blob/blob_bs_dev.o
00:27:31.411 CC lib/virtio/virtio_pci.o
00:27:31.411 CC lib/init/subsystem_rpc.o
00:27:31.411 CC lib/init/rpc.o
00:27:31.411 CC lib/fsdev/fsdev.o
00:27:31.670 CC lib/fsdev/fsdev_io.o
00:27:31.670 CC lib/fsdev/fsdev_rpc.o
00:27:31.670 LIB libspdk_init.a
00:27:31.670 SO libspdk_init.so.6.0
00:27:31.670 LIB libspdk_virtio.a
00:27:31.670 SYMLINK libspdk_init.so
00:27:31.670 LIB libspdk_accel.a
00:27:31.670 SO libspdk_virtio.so.7.0
00:27:31.929 SO libspdk_accel.so.16.0
00:27:31.929 SYMLINK libspdk_virtio.so
00:27:31.929 SYMLINK libspdk_accel.so
00:27:32.187 CC lib/event/reactor.o
00:27:32.187 CC lib/event/app.o
00:27:32.187 CC lib/event/log_rpc.o
00:27:32.187 CC lib/event/app_rpc.o
00:27:32.187 CC lib/event/scheduler_static.o
00:27:32.187 LIB libspdk_fsdev.a
00:27:32.187 CC lib/bdev/bdev.o
00:27:32.187 CC lib/bdev/bdev_rpc.o
00:27:32.187 SO libspdk_fsdev.so.2.0
00:27:32.187 CC lib/bdev/part.o
00:27:32.187 CC lib/bdev/bdev_zone.o
00:27:32.445 LIB libspdk_nvme.a
00:27:32.445 SYMLINK libspdk_fsdev.so
00:27:32.445 CC lib/bdev/scsi_nvme.o
00:27:32.445 CC lib/fuse_dispatcher/fuse_dispatcher.o
00:27:32.445 SO libspdk_nvme.so.15.0
00:27:32.703 LIB libspdk_event.a
00:27:32.703 SO libspdk_event.so.14.0
00:27:32.703 SYMLINK libspdk_event.so
00:27:32.961 SYMLINK libspdk_nvme.so
00:27:33.219 LIB libspdk_fuse_dispatcher.a
00:27:33.477 SO libspdk_fuse_dispatcher.so.1.0
00:27:33.477 SYMLINK libspdk_fuse_dispatcher.so
00:27:34.852 LIB libspdk_blob.a
00:27:34.852 SO libspdk_blob.so.12.0
00:27:34.852 SYMLINK libspdk_blob.so
00:27:35.419 CC lib/lvol/lvol.o
00:27:35.419 CC lib/blobfs/tree.o
00:27:35.419 CC lib/blobfs/blobfs.o
00:27:35.419 LIB libspdk_bdev.a
00:27:35.419 SO libspdk_bdev.so.17.0
00:27:35.678 SYMLINK libspdk_bdev.so
00:27:35.937 CC lib/nvmf/ctrlr.o
00:27:35.937 CC lib/nvmf/subsystem.o
00:27:35.937 CC lib/nvmf/ctrlr_bdev.o
00:27:35.937 CC lib/nvmf/ctrlr_discovery.o
00:27:35.937 CC lib/nbd/nbd.o
00:27:35.937 CC lib/ublk/ublk.o
00:27:35.937 CC lib/scsi/dev.o
00:27:35.937 CC lib/ftl/ftl_core.o
00:27:36.196 CC lib/scsi/lun.o
00:27:36.196 LIB libspdk_blobfs.a
00:27:36.196 SO libspdk_blobfs.so.11.0
00:27:36.455 SYMLINK libspdk_blobfs.so
00:27:36.455 CC lib/scsi/port.o
00:27:36.455 CC lib/ftl/ftl_init.o
00:27:36.455 LIB libspdk_lvol.a
00:27:36.455 CC lib/nbd/nbd_rpc.o
00:27:36.455 SO libspdk_lvol.so.11.0
00:27:36.455 CC lib/ublk/ublk_rpc.o
00:27:36.455 CC lib/ftl/ftl_layout.o
00:27:36.455 SYMLINK libspdk_lvol.so
00:27:36.455 CC lib/ftl/ftl_debug.o
00:27:36.455 CC lib/scsi/scsi.o
00:27:36.713 CC lib/scsi/scsi_bdev.o
00:27:36.713 LIB libspdk_nbd.a
00:27:36.713 CC lib/scsi/scsi_pr.o
00:27:36.713 SO libspdk_nbd.so.7.0
00:27:36.713 LIB libspdk_ublk.a
00:27:36.713 CC lib/nvmf/nvmf.o
00:27:36.713 CC lib/nvmf/nvmf_rpc.o
00:27:36.713 SYMLINK libspdk_nbd.so
00:27:36.713 CC lib/nvmf/transport.o
00:27:36.714 SO libspdk_ublk.so.3.0
00:27:36.714 CC lib/nvmf/tcp.o
00:27:36.971 SYMLINK libspdk_ublk.so
00:27:36.971 CC lib/ftl/ftl_io.o
00:27:36.971 CC lib/nvmf/stubs.o
00:27:36.971 CC lib/scsi/scsi_rpc.o
00:27:37.230 CC lib/ftl/ftl_sb.o
00:27:37.230 CC lib/ftl/ftl_l2p.o
00:27:37.230 CC lib/scsi/task.o
00:27:37.489 CC lib/ftl/ftl_l2p_flat.o
00:27:37.489 CC lib/ftl/ftl_nv_cache.o
00:27:37.489 CC lib/nvmf/mdns_server.o
00:27:37.489 LIB libspdk_scsi.a
00:27:37.489 CC lib/ftl/ftl_band.o
00:27:37.489 SO libspdk_scsi.so.9.0
00:27:37.748 SYMLINK libspdk_scsi.so
00:27:37.748 CC lib/nvmf/rdma.o
00:27:37.748 CC lib/nvmf/auth.o
00:27:37.748 CC lib/ftl/ftl_band_ops.o
00:27:37.748 CC lib/ftl/ftl_writer.o
00:27:38.008 CC lib/ftl/ftl_rq.o
00:27:38.008 CC lib/ftl/ftl_reloc.o
00:27:38.008 CC lib/iscsi/conn.o
00:27:38.008 CC lib/iscsi/init_grp.o
00:27:38.008 CC lib/iscsi/iscsi.o
00:27:38.008 CC lib/iscsi/param.o
00:27:38.369 CC lib/ftl/ftl_l2p_cache.o
00:27:38.369 CC lib/ftl/ftl_p2l.o
00:27:38.369 CC lib/ftl/ftl_p2l_log.o
00:27:38.656 CC lib/ftl/mngt/ftl_mngt.o
00:27:38.656 CC lib/iscsi/portal_grp.o
00:27:38.656 CC lib/iscsi/tgt_node.o
00:27:38.656 CC lib/iscsi/iscsi_subsystem.o
00:27:38.656 CC lib/iscsi/iscsi_rpc.o
00:27:38.656 CC lib/iscsi/task.o
00:27:38.915 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:27:38.915 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:27:38.915 CC lib/ftl/mngt/ftl_mngt_startup.o
00:27:39.172 CC lib/ftl/mngt/ftl_mngt_md.o
00:27:39.172 CC lib/ftl/mngt/ftl_mngt_misc.o
00:27:39.172 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:27:39.172 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:27:39.172 CC lib/ftl/mngt/ftl_mngt_band.o
00:27:39.172 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:27:39.172 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:27:39.172 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:27:39.429 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:27:39.429 CC lib/ftl/utils/ftl_conf.o
00:27:39.429 CC lib/ftl/utils/ftl_md.o
00:27:39.429 CC lib/ftl/utils/ftl_mempool.o
00:27:39.429 CC lib/ftl/utils/ftl_bitmap.o
00:27:39.687 CC lib/vhost/vhost_rpc.o
00:27:39.687 CC lib/vhost/vhost.o
00:27:39.687 CC lib/vhost/vhost_scsi.o
00:27:39.687 CC lib/vhost/vhost_blk.o
00:27:39.687 CC lib/vhost/rte_vhost_user.o
00:27:39.687 CC lib/ftl/utils/ftl_property.o
00:27:39.945 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:27:39.945 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:27:39.945 LIB libspdk_iscsi.a
00:27:39.945 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:27:40.204 SO libspdk_iscsi.so.8.0
00:27:40.204 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:27:40.204 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:27:40.204 SYMLINK libspdk_iscsi.so
00:27:40.204 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:27:40.204 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:27:40.461 CC lib/ftl/upgrade/ftl_sb_v3.o
00:27:40.461 CC lib/ftl/upgrade/ftl_sb_v5.o
00:27:40.461 CC lib/ftl/nvc/ftl_nvc_dev.o
00:27:40.461 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:27:40.461 LIB libspdk_nvmf.a
00:27:40.461 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:27:40.720 CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:27:40.720 CC lib/ftl/base/ftl_base_dev.o
00:27:40.720 SO libspdk_nvmf.so.20.0
00:27:40.720 CC lib/ftl/base/ftl_base_bdev.o
00:27:40.720 CC lib/ftl/ftl_trace.o
00:27:40.979 SYMLINK libspdk_nvmf.so
00:27:40.979 LIB libspdk_ftl.a
00:27:40.979 LIB libspdk_vhost.a
00:27:41.236 SO libspdk_vhost.so.8.0
00:27:41.237 SYMLINK libspdk_vhost.so
00:27:41.493 SO libspdk_ftl.so.9.0
00:27:41.750 SYMLINK libspdk_ftl.so
00:27:42.317 CC module/env_dpdk/env_dpdk_rpc.o
00:27:42.317 CC module/keyring/file/keyring.o
00:27:42.317 CC module/sock/posix/posix.o
00:27:42.317 CC module/blob/bdev/blob_bdev.o
00:27:42.317 CC module/scheduler/dynamic/scheduler_dynamic.o
00:27:42.317 CC module/accel/error/accel_error.o
00:27:42.317 CC module/accel/ioat/accel_ioat.o
00:27:42.317 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:27:42.317 CC module/keyring/linux/keyring.o
00:27:42.317 CC module/fsdev/aio/fsdev_aio.o
00:27:42.317 LIB libspdk_env_dpdk_rpc.a
00:27:42.317 SO libspdk_env_dpdk_rpc.so.6.0
00:27:42.317 SYMLINK libspdk_env_dpdk_rpc.so
00:27:42.317 CC module/fsdev/aio/fsdev_aio_rpc.o
00:27:42.317 CC module/keyring/file/keyring_rpc.o
00:27:42.317 CC module/keyring/linux/keyring_rpc.o
00:27:42.317 LIB libspdk_scheduler_dpdk_governor.a
00:27:42.317 CC module/accel/ioat/accel_ioat_rpc.o
00:27:42.577 SO libspdk_scheduler_dpdk_governor.so.4.0
00:27:42.577 CC module/accel/error/accel_error_rpc.o
00:27:42.577 LIB libspdk_scheduler_dynamic.a
00:27:42.577 SO libspdk_scheduler_dynamic.so.4.0
00:27:42.577 SYMLINK libspdk_scheduler_dpdk_governor.so
00:27:42.577 LIB libspdk_keyring_file.a
00:27:42.577 LIB libspdk_blob_bdev.a
00:27:42.577 CC module/fsdev/aio/linux_aio_mgr.o
00:27:42.577 LIB libspdk_keyring_linux.a
00:27:42.577 SO libspdk_blob_bdev.so.12.0
00:27:42.577 SO libspdk_keyring_file.so.2.0
00:27:42.577 SO libspdk_keyring_linux.so.1.0
00:27:42.577 SYMLINK libspdk_scheduler_dynamic.so
00:27:42.577 LIB libspdk_accel_error.a
00:27:42.577 LIB libspdk_accel_ioat.a
00:27:42.577 SYMLINK libspdk_blob_bdev.so
00:27:42.577 SO libspdk_accel_error.so.2.0
00:27:42.577 SO libspdk_accel_ioat.so.6.0
00:27:42.577 SYMLINK libspdk_keyring_file.so
00:27:42.577 SYMLINK libspdk_keyring_linux.so
00:27:42.835 SYMLINK libspdk_accel_error.so
00:27:42.835 SYMLINK libspdk_accel_ioat.so
00:27:42.835 CC module/accel/dsa/accel_dsa.o
00:27:42.835 CC module/accel/dsa/accel_dsa_rpc.o
00:27:42.835 CC module/scheduler/gscheduler/gscheduler.o
00:27:42.835 CC module/accel/iaa/accel_iaa.o
00:27:42.835 CC module/accel/iaa/accel_iaa_rpc.o
00:27:42.835 CC module/bdev/error/vbdev_error.o
00:27:42.835 CC module/bdev/delay/vbdev_delay.o
00:27:42.835 LIB libspdk_scheduler_gscheduler.a
00:27:42.835 CC module/bdev/gpt/gpt.o
00:27:42.835 CC module/blobfs/bdev/blobfs_bdev.o
00:27:43.093 SO libspdk_scheduler_gscheduler.so.4.0
00:27:43.093 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:27:43.093 SYMLINK libspdk_scheduler_gscheduler.so
00:27:43.093 LIB libspdk_accel_dsa.a
00:27:43.093 CC module/bdev/gpt/vbdev_gpt.o
00:27:43.093 LIB libspdk_accel_iaa.a
00:27:43.093 LIB libspdk_fsdev_aio.a
00:27:43.093 SO libspdk_accel_dsa.so.5.0
00:27:43.093 SO libspdk_accel_iaa.so.3.0
00:27:43.093 SO libspdk_fsdev_aio.so.1.0
00:27:43.093 LIB libspdk_sock_posix.a
00:27:43.093 CC module/bdev/delay/vbdev_delay_rpc.o
00:27:43.093 SYMLINK libspdk_accel_dsa.so
00:27:43.093 CC module/bdev/error/vbdev_error_rpc.o
00:27:43.093 SO libspdk_sock_posix.so.6.0
00:27:43.093 SYMLINK libspdk_accel_iaa.so
00:27:43.093 SYMLINK libspdk_fsdev_aio.so
00:27:43.351 LIB libspdk_blobfs_bdev.a
00:27:43.351 SYMLINK libspdk_sock_posix.so
00:27:43.351 SO libspdk_blobfs_bdev.so.6.0
00:27:43.351 LIB libspdk_bdev_error.a
00:27:43.351 LIB libspdk_bdev_delay.a
00:27:43.351 SYMLINK libspdk_blobfs_bdev.so
00:27:43.351 LIB libspdk_bdev_gpt.a
00:27:43.351 SO libspdk_bdev_error.so.6.0
00:27:43.351 CC module/bdev/malloc/bdev_malloc.o
00:27:43.351 SO libspdk_bdev_gpt.so.6.0
00:27:43.351 SO libspdk_bdev_delay.so.6.0
00:27:43.351 CC module/bdev/lvol/vbdev_lvol.o
00:27:43.351 CC module/bdev/nvme/bdev_nvme.o
00:27:43.351 CC module/bdev/null/bdev_null.o
00:27:43.607 SYMLINK libspdk_bdev_error.so
00:27:43.607 CC module/bdev/passthru/vbdev_passthru.o
00:27:43.607 SYMLINK libspdk_bdev_gpt.so
00:27:43.607 SYMLINK libspdk_bdev_delay.so
00:27:43.607 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:27:43.607 CC module/bdev/malloc/bdev_malloc_rpc.o
00:27:43.607 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:27:43.607 CC module/bdev/raid/bdev_raid.o
00:27:43.607 CC module/bdev/split/vbdev_split.o
00:27:43.607 CC module/bdev/split/vbdev_split_rpc.o
00:27:43.607 CC module/bdev/raid/bdev_raid_rpc.o
00:27:43.863 CC module/bdev/null/bdev_null_rpc.o
00:27:43.863 LIB libspdk_bdev_passthru.a
00:27:43.863 CC module/bdev/raid/bdev_raid_sb.o
00:27:43.863 SO libspdk_bdev_passthru.so.6.0
00:27:43.863 LIB libspdk_bdev_split.a
00:27:43.863 LIB libspdk_bdev_malloc.a
00:27:43.863 SO libspdk_bdev_split.so.6.0
00:27:43.863 SO libspdk_bdev_malloc.so.6.0
00:27:43.863 SYMLINK libspdk_bdev_passthru.so
00:27:43.863 CC module/bdev/nvme/bdev_nvme_rpc.o
00:27:43.864 LIB libspdk_bdev_null.a
00:27:43.864 SYMLINK libspdk_bdev_split.so
00:27:43.864 CC module/bdev/nvme/nvme_rpc.o
00:27:43.864 SO libspdk_bdev_null.so.6.0
00:27:43.864 SYMLINK libspdk_bdev_malloc.so
00:27:44.121 SYMLINK libspdk_bdev_null.so
00:27:44.121 CC module/bdev/nvme/bdev_mdns_client.o
00:27:44.121 LIB libspdk_bdev_lvol.a
00:27:44.121 CC module/bdev/zone_block/vbdev_zone_block.o
00:27:44.121 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:27:44.121 CC module/bdev/aio/bdev_aio.o
00:27:44.121 SO libspdk_bdev_lvol.so.6.0
00:27:44.121 CC module/bdev/ftl/bdev_ftl.o
00:27:44.121 CC module/bdev/aio/bdev_aio_rpc.o
00:27:44.121 CC module/bdev/nvme/vbdev_opal.o
00:27:44.121 SYMLINK libspdk_bdev_lvol.so
00:27:44.379 CC module/bdev/raid/raid0.o
00:27:44.379 CC module/bdev/iscsi/bdev_iscsi.o
00:27:44.379 CC module/bdev/virtio/bdev_virtio_scsi.o
00:27:44.639 LIB libspdk_bdev_zone_block.a
00:27:44.639 SO libspdk_bdev_zone_block.so.6.0
00:27:44.639 SYMLINK libspdk_bdev_zone_block.so
00:27:44.639 CC module/bdev/virtio/bdev_virtio_blk.o
00:27:44.639 CC module/bdev/virtio/bdev_virtio_rpc.o
00:27:44.639 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:27:44.898 LIB libspdk_bdev_aio.a
00:27:44.898 CC module/bdev/ftl/bdev_ftl_rpc.o
00:27:44.898 SO libspdk_bdev_aio.so.6.0
00:27:44.898 CC module/bdev/raid/raid1.o
00:27:44.898 CC module/bdev/raid/concat.o
00:27:44.898 CC module/bdev/raid/raid5f.o
00:27:44.898 CC module/bdev/nvme/vbdev_opal_rpc.o
00:27:44.898 LIB libspdk_bdev_iscsi.a
00:27:44.898 SYMLINK libspdk_bdev_aio.so
00:27:44.898 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:27:44.898 SO libspdk_bdev_iscsi.so.6.0
00:27:45.156 LIB libspdk_bdev_ftl.a
00:27:45.156 SO libspdk_bdev_ftl.so.6.0
00:27:45.156 SYMLINK libspdk_bdev_iscsi.so
00:27:45.156 SYMLINK libspdk_bdev_ftl.so
00:27:45.156 LIB libspdk_bdev_virtio.a
00:27:45.156 SO libspdk_bdev_virtio.so.6.0
00:27:45.413 SYMLINK libspdk_bdev_virtio.so
00:27:45.413 LIB libspdk_bdev_raid.a
00:27:45.672 SO libspdk_bdev_raid.so.6.0
00:27:45.672 SYMLINK libspdk_bdev_raid.so
00:27:46.606 LIB libspdk_bdev_nvme.a
00:27:46.606 SO libspdk_bdev_nvme.so.7.1
00:27:46.865 SYMLINK libspdk_bdev_nvme.so
00:27:47.435 CC module/event/subsystems/vmd/vmd_rpc.o
00:27:47.435 CC module/event/subsystems/iobuf/iobuf.o
00:27:47.435 CC module/event/subsystems/vmd/vmd.o
00:27:47.435 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:27:47.435 CC module/event/subsystems/keyring/keyring.o
00:27:47.435 CC module/event/subsystems/fsdev/fsdev.o
00:27:47.435 CC module/event/subsystems/sock/sock.o
00:27:47.435 CC module/event/subsystems/scheduler/scheduler.o
00:27:47.435 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:27:47.435 LIB libspdk_event_fsdev.a
00:27:47.435 LIB libspdk_event_keyring.a
00:27:47.435 LIB libspdk_event_sock.a
00:27:47.435 LIB libspdk_event_scheduler.a
00:27:47.435 LIB libspdk_event_vhost_blk.a
00:27:47.732 SO libspdk_event_keyring.so.1.0
00:27:47.732 SO libspdk_event_fsdev.so.1.0
00:27:47.732 SO libspdk_event_sock.so.5.0
00:27:47.732 SO libspdk_event_scheduler.so.4.0
00:27:47.732 SO libspdk_event_vhost_blk.so.3.0
00:27:47.732 LIB libspdk_event_vmd.a
00:27:47.732 LIB libspdk_event_iobuf.a
00:27:47.732 SO libspdk_event_vmd.so.6.0
00:27:47.732 SYMLINK libspdk_event_keyring.so
00:27:47.732 SYMLINK libspdk_event_scheduler.so
00:27:47.732 SYMLINK libspdk_event_sock.so
00:27:47.732 SYMLINK libspdk_event_vhost_blk.so
00:27:47.732 SO libspdk_event_iobuf.so.3.0
00:27:47.732 SYMLINK libspdk_event_fsdev.so
00:27:47.732 SYMLINK libspdk_event_vmd.so
00:27:47.732 SYMLINK libspdk_event_iobuf.so
00:27:47.990 CC module/event/subsystems/accel/accel.o
00:27:48.248 LIB libspdk_event_accel.a
00:27:48.248 SO libspdk_event_accel.so.6.0
00:27:48.248 SYMLINK libspdk_event_accel.so
00:27:48.813 CC module/event/subsystems/bdev/bdev.o
00:27:48.813 LIB libspdk_event_bdev.a
00:27:49.071 SO libspdk_event_bdev.so.6.0
00:27:49.071 SYMLINK libspdk_event_bdev.so
00:27:49.328 CC module/event/subsystems/nbd/nbd.o
00:27:49.328 CC module/event/subsystems/scsi/scsi.o
00:27:49.328 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:27:49.328 CC module/event/subsystems/ublk/ublk.o
00:27:49.328 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:27:49.587 LIB libspdk_event_nbd.a
00:27:49.587 LIB libspdk_event_scsi.a
00:27:49.587 LIB libspdk_event_ublk.a
00:27:49.587 SO libspdk_event_nbd.so.6.0
00:27:49.587 SO libspdk_event_scsi.so.6.0
00:27:49.587 SO libspdk_event_ublk.so.3.0
00:27:49.587 SYMLINK libspdk_event_nbd.so
00:27:49.587 LIB libspdk_event_nvmf.a
00:27:49.587 SYMLINK libspdk_event_ublk.so
00:27:49.587 SYMLINK libspdk_event_scsi.so
00:27:49.587 SO libspdk_event_nvmf.so.6.0
00:27:49.845 SYMLINK libspdk_event_nvmf.so
00:27:50.103 CC module/event/subsystems/iscsi/iscsi.o
00:27:50.103 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:27:50.103 LIB libspdk_event_iscsi.a
00:27:50.103 LIB libspdk_event_vhost_scsi.a
00:27:50.359 SO libspdk_event_iscsi.so.6.0
00:27:50.359 SO libspdk_event_vhost_scsi.so.3.0
00:27:50.359 SYMLINK libspdk_event_iscsi.so
00:27:50.359 SYMLINK libspdk_event_vhost_scsi.so
00:27:50.615 SO libspdk.so.6.0
00:27:50.615 SYMLINK libspdk.so
00:27:50.873 CXX app/trace/trace.o
00:27:50.874 CC app/spdk_lspci/spdk_lspci.o
00:27:50.874 CC app/trace_record/trace_record.o
00:27:50.874 CC examples/interrupt_tgt/interrupt_tgt.o
00:27:50.874 CC app/iscsi_tgt/iscsi_tgt.o
00:27:50.874 CC app/nvmf_tgt/nvmf_main.o
00:27:50.874 CC app/spdk_tgt/spdk_tgt.o
00:27:50.874 CC examples/util/zipf/zipf.o
00:27:50.874 CC examples/ioat/perf/perf.o
00:27:50.874 CC test/thread/poller_perf/poller_perf.o
00:27:51.132 LINK spdk_lspci
00:27:51.132 LINK interrupt_tgt
00:27:51.132 LINK nvmf_tgt
00:27:51.132 LINK zipf
00:27:51.132 LINK spdk_tgt
00:27:51.132 LINK iscsi_tgt
00:27:51.132 LINK poller_perf
00:27:51.132 LINK spdk_trace_record
00:27:51.132 LINK ioat_perf
00:27:51.391 CC app/spdk_nvme_perf/perf.o
00:27:51.391 LINK spdk_trace
00:27:51.391 CC app/spdk_nvme_identify/identify.o
00:27:51.391 CC app/spdk_nvme_discover/discovery_aer.o
00:27:51.391 TEST_HEADER include/spdk/accel.h
00:27:51.391 TEST_HEADER include/spdk/accel_module.h
00:27:51.391 TEST_HEADER include/spdk/assert.h
00:27:51.391 CC app/spdk_top/spdk_top.o
00:27:51.391 TEST_HEADER include/spdk/barrier.h
00:27:51.391 TEST_HEADER include/spdk/base64.h
00:27:51.391 TEST_HEADER include/spdk/bdev.h
00:27:51.391 TEST_HEADER include/spdk/bdev_module.h
00:27:51.391 TEST_HEADER include/spdk/bdev_zone.h
00:27:51.391 TEST_HEADER include/spdk/bit_array.h
00:27:51.391 TEST_HEADER include/spdk/bit_pool.h
00:27:51.391 TEST_HEADER include/spdk/blob_bdev.h
00:27:51.391 TEST_HEADER include/spdk/blobfs_bdev.h
00:27:51.391 TEST_HEADER include/spdk/blobfs.h
00:27:51.391
TEST_HEADER include/spdk/blob.h 00:27:51.391 TEST_HEADER include/spdk/conf.h 00:27:51.391 TEST_HEADER include/spdk/config.h 00:27:51.391 TEST_HEADER include/spdk/cpuset.h 00:27:51.391 TEST_HEADER include/spdk/crc16.h 00:27:51.391 CC examples/ioat/verify/verify.o 00:27:51.391 TEST_HEADER include/spdk/crc32.h 00:27:51.391 TEST_HEADER include/spdk/crc64.h 00:27:51.391 TEST_HEADER include/spdk/dif.h 00:27:51.649 TEST_HEADER include/spdk/dma.h 00:27:51.649 TEST_HEADER include/spdk/endian.h 00:27:51.649 TEST_HEADER include/spdk/env_dpdk.h 00:27:51.649 TEST_HEADER include/spdk/env.h 00:27:51.649 TEST_HEADER include/spdk/event.h 00:27:51.649 TEST_HEADER include/spdk/fd_group.h 00:27:51.649 TEST_HEADER include/spdk/fd.h 00:27:51.649 TEST_HEADER include/spdk/file.h 00:27:51.649 TEST_HEADER include/spdk/fsdev.h 00:27:51.649 TEST_HEADER include/spdk/fsdev_module.h 00:27:51.649 TEST_HEADER include/spdk/ftl.h 00:27:51.649 TEST_HEADER include/spdk/gpt_spec.h 00:27:51.649 TEST_HEADER include/spdk/hexlify.h 00:27:51.649 TEST_HEADER include/spdk/histogram_data.h 00:27:51.649 TEST_HEADER include/spdk/idxd.h 00:27:51.649 TEST_HEADER include/spdk/idxd_spec.h 00:27:51.649 TEST_HEADER include/spdk/init.h 00:27:51.649 TEST_HEADER include/spdk/ioat.h 00:27:51.649 TEST_HEADER include/spdk/ioat_spec.h 00:27:51.649 TEST_HEADER include/spdk/iscsi_spec.h 00:27:51.649 TEST_HEADER include/spdk/json.h 00:27:51.649 CC test/app/bdev_svc/bdev_svc.o 00:27:51.649 TEST_HEADER include/spdk/jsonrpc.h 00:27:51.649 TEST_HEADER include/spdk/keyring.h 00:27:51.649 TEST_HEADER include/spdk/keyring_module.h 00:27:51.649 TEST_HEADER include/spdk/likely.h 00:27:51.649 TEST_HEADER include/spdk/log.h 00:27:51.649 TEST_HEADER include/spdk/lvol.h 00:27:51.649 TEST_HEADER include/spdk/md5.h 00:27:51.649 TEST_HEADER include/spdk/memory.h 00:27:51.649 CC test/dma/test_dma/test_dma.o 00:27:51.649 TEST_HEADER include/spdk/mmio.h 00:27:51.649 TEST_HEADER include/spdk/nbd.h 00:27:51.649 TEST_HEADER include/spdk/net.h 
00:27:51.649 TEST_HEADER include/spdk/notify.h 00:27:51.649 TEST_HEADER include/spdk/nvme.h 00:27:51.649 TEST_HEADER include/spdk/nvme_intel.h 00:27:51.649 TEST_HEADER include/spdk/nvme_ocssd.h 00:27:51.649 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:27:51.649 CC test/env/vtophys/vtophys.o 00:27:51.649 TEST_HEADER include/spdk/nvme_spec.h 00:27:51.649 TEST_HEADER include/spdk/nvme_zns.h 00:27:51.649 TEST_HEADER include/spdk/nvmf_cmd.h 00:27:51.649 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:27:51.649 TEST_HEADER include/spdk/nvmf.h 00:27:51.649 TEST_HEADER include/spdk/nvmf_spec.h 00:27:51.649 TEST_HEADER include/spdk/nvmf_transport.h 00:27:51.649 TEST_HEADER include/spdk/opal.h 00:27:51.649 TEST_HEADER include/spdk/opal_spec.h 00:27:51.649 TEST_HEADER include/spdk/pci_ids.h 00:27:51.649 TEST_HEADER include/spdk/pipe.h 00:27:51.649 TEST_HEADER include/spdk/queue.h 00:27:51.649 TEST_HEADER include/spdk/reduce.h 00:27:51.649 LINK spdk_nvme_discover 00:27:51.649 TEST_HEADER include/spdk/rpc.h 00:27:51.649 TEST_HEADER include/spdk/scheduler.h 00:27:51.649 TEST_HEADER include/spdk/scsi.h 00:27:51.649 TEST_HEADER include/spdk/scsi_spec.h 00:27:51.649 TEST_HEADER include/spdk/sock.h 00:27:51.649 TEST_HEADER include/spdk/stdinc.h 00:27:51.649 TEST_HEADER include/spdk/string.h 00:27:51.649 TEST_HEADER include/spdk/thread.h 00:27:51.649 TEST_HEADER include/spdk/trace.h 00:27:51.649 TEST_HEADER include/spdk/trace_parser.h 00:27:51.649 TEST_HEADER include/spdk/tree.h 00:27:51.649 TEST_HEADER include/spdk/ublk.h 00:27:51.649 TEST_HEADER include/spdk/util.h 00:27:51.649 CC test/env/mem_callbacks/mem_callbacks.o 00:27:51.649 TEST_HEADER include/spdk/uuid.h 00:27:51.649 TEST_HEADER include/spdk/version.h 00:27:51.649 TEST_HEADER include/spdk/vfio_user_pci.h 00:27:51.649 TEST_HEADER include/spdk/vfio_user_spec.h 00:27:51.649 TEST_HEADER include/spdk/vhost.h 00:27:51.649 TEST_HEADER include/spdk/vmd.h 00:27:51.649 TEST_HEADER include/spdk/xor.h 00:27:51.649 TEST_HEADER 
include/spdk/zipf.h 00:27:51.649 CXX test/cpp_headers/accel.o 00:27:51.649 LINK bdev_svc 00:27:51.649 LINK verify 00:27:51.908 LINK vtophys 00:27:51.908 CXX test/cpp_headers/accel_module.o 00:27:51.908 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:27:52.166 CC test/app/histogram_perf/histogram_perf.o 00:27:52.166 LINK env_dpdk_post_init 00:27:52.166 CXX test/cpp_headers/assert.o 00:27:52.166 LINK test_dma 00:27:52.166 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:27:52.166 CC examples/thread/thread/thread_ex.o 00:27:52.166 LINK mem_callbacks 00:27:52.423 LINK histogram_perf 00:27:52.423 CXX test/cpp_headers/barrier.o 00:27:52.423 CXX test/cpp_headers/base64.o 00:27:52.423 LINK thread 00:27:52.423 LINK spdk_nvme_identify 00:27:52.681 LINK spdk_top 00:27:52.681 CC test/app/jsoncat/jsoncat.o 00:27:52.681 CC test/env/memory/memory_ut.o 00:27:52.681 CXX test/cpp_headers/bdev.o 00:27:52.681 CC app/spdk_dd/spdk_dd.o 00:27:52.681 CXX test/cpp_headers/bdev_module.o 00:27:52.681 LINK spdk_nvme_perf 00:27:52.681 LINK nvme_fuzz 00:27:52.681 LINK jsoncat 00:27:52.681 CC app/fio/nvme/fio_plugin.o 00:27:52.681 CXX test/cpp_headers/bdev_zone.o 00:27:52.945 CC examples/sock/hello_world/hello_sock.o 00:27:52.945 CXX test/cpp_headers/bit_array.o 00:27:52.945 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:27:52.945 CC app/vhost/vhost.o 00:27:52.945 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:27:52.945 CXX test/cpp_headers/bit_pool.o 00:27:53.220 CC test/env/pci/pci_ut.o 00:27:53.220 LINK spdk_dd 00:27:53.220 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:27:53.220 CXX test/cpp_headers/blob_bdev.o 00:27:53.220 LINK hello_sock 00:27:53.220 LINK vhost 00:27:53.220 CC test/event/event_perf/event_perf.o 00:27:53.479 CXX test/cpp_headers/blobfs_bdev.o 00:27:53.479 CC test/event/reactor/reactor.o 00:27:53.479 LINK event_perf 00:27:53.737 LINK reactor 00:27:53.737 CXX test/cpp_headers/blobfs.o 00:27:53.737 CC test/event/reactor_perf/reactor_perf.o 00:27:53.737 CC examples/vmd/lsvmd/lsvmd.o 
00:27:53.737 LINK pci_ut 00:27:53.737 LINK vhost_fuzz 00:27:53.737 LINK spdk_nvme 00:27:53.737 LINK reactor_perf 00:27:53.737 CC examples/vmd/led/led.o 00:27:53.737 LINK lsvmd 00:27:53.737 CXX test/cpp_headers/blob.o 00:27:53.995 CC app/fio/bdev/fio_plugin.o 00:27:53.995 LINK led 00:27:53.995 LINK memory_ut 00:27:53.995 CXX test/cpp_headers/conf.o 00:27:53.996 CC test/nvme/aer/aer.o 00:27:54.254 CC test/nvme/reset/reset.o 00:27:54.254 CC test/event/app_repeat/app_repeat.o 00:27:54.254 CC test/nvme/sgl/sgl.o 00:27:54.254 CC test/nvme/e2edp/nvme_dp.o 00:27:54.254 CXX test/cpp_headers/config.o 00:27:54.254 CXX test/cpp_headers/cpuset.o 00:27:54.254 CC test/rpc_client/rpc_client_test.o 00:27:54.512 LINK aer 00:27:54.512 LINK app_repeat 00:27:54.512 CC examples/idxd/perf/perf.o 00:27:54.512 LINK reset 00:27:54.512 CXX test/cpp_headers/crc16.o 00:27:54.512 LINK nvme_dp 00:27:54.512 LINK spdk_bdev 00:27:54.512 LINK rpc_client_test 00:27:54.769 CXX test/cpp_headers/crc32.o 00:27:54.769 LINK sgl 00:27:54.769 CXX test/cpp_headers/crc64.o 00:27:54.769 CC test/nvme/overhead/overhead.o 00:27:54.769 CC test/accel/dif/dif.o 00:27:54.769 CC test/nvme/err_injection/err_injection.o 00:27:54.769 CC test/event/scheduler/scheduler.o 00:27:55.027 CC test/blobfs/mkfs/mkfs.o 00:27:55.027 LINK idxd_perf 00:27:55.027 CXX test/cpp_headers/dif.o 00:27:55.027 CC test/nvme/startup/startup.o 00:27:55.027 LINK err_injection 00:27:55.285 LINK mkfs 00:27:55.285 LINK iscsi_fuzz 00:27:55.285 LINK overhead 00:27:55.285 CC test/lvol/esnap/esnap.o 00:27:55.285 LINK scheduler 00:27:55.285 LINK startup 00:27:55.285 CXX test/cpp_headers/dma.o 00:27:55.285 CC examples/fsdev/hello_world/hello_fsdev.o 00:27:55.285 CC test/nvme/reserve/reserve.o 00:27:55.543 CC test/nvme/simple_copy/simple_copy.o 00:27:55.543 CC test/nvme/connect_stress/connect_stress.o 00:27:55.543 CC test/nvme/boot_partition/boot_partition.o 00:27:55.543 CC test/app/stub/stub.o 00:27:55.543 CXX test/cpp_headers/endian.o 00:27:55.800 LINK dif 
00:27:55.800 LINK reserve 00:27:55.800 CC test/nvme/compliance/nvme_compliance.o 00:27:55.800 LINK hello_fsdev 00:27:55.800 LINK connect_stress 00:27:55.800 LINK boot_partition 00:27:55.800 LINK stub 00:27:55.800 LINK simple_copy 00:27:56.058 CXX test/cpp_headers/env_dpdk.o 00:27:56.058 CC test/nvme/fused_ordering/fused_ordering.o 00:27:56.058 CXX test/cpp_headers/env.o 00:27:56.058 CC test/nvme/doorbell_aers/doorbell_aers.o 00:27:56.058 CC test/nvme/fdp/fdp.o 00:27:56.058 CXX test/cpp_headers/event.o 00:27:56.058 CC examples/accel/perf/accel_perf.o 00:27:56.058 LINK nvme_compliance 00:27:56.317 LINK fused_ordering 00:27:56.317 CXX test/cpp_headers/fd_group.o 00:27:56.317 CC examples/blob/hello_world/hello_blob.o 00:27:56.317 LINK doorbell_aers 00:27:56.576 CC examples/nvme/hello_world/hello_world.o 00:27:56.576 CC examples/nvme/reconnect/reconnect.o 00:27:56.576 CXX test/cpp_headers/fd.o 00:27:56.576 LINK fdp 00:27:56.576 CC test/bdev/bdevio/bdevio.o 00:27:56.576 CC test/nvme/cuse/cuse.o 00:27:56.576 LINK hello_blob 00:27:56.576 CC examples/nvme/nvme_manage/nvme_manage.o 00:27:56.576 CXX test/cpp_headers/file.o 00:27:56.834 CXX test/cpp_headers/fsdev.o 00:27:56.834 LINK hello_world 00:27:56.834 LINK reconnect 00:27:56.834 CXX test/cpp_headers/fsdev_module.o 00:27:57.093 CC examples/blob/cli/blobcli.o 00:27:57.093 CC examples/nvme/arbitration/arbitration.o 00:27:57.093 LINK bdevio 00:27:57.093 LINK accel_perf 00:27:57.093 CXX test/cpp_headers/ftl.o 00:27:57.093 CC examples/nvme/hotplug/hotplug.o 00:27:57.093 CC examples/nvme/cmb_copy/cmb_copy.o 00:27:57.352 CC examples/nvme/abort/abort.o 00:27:57.352 CXX test/cpp_headers/gpt_spec.o 00:27:57.352 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:27:57.352 LINK nvme_manage 00:27:57.352 LINK arbitration 00:27:57.352 LINK cmb_copy 00:27:57.352 LINK hotplug 00:27:57.610 CXX test/cpp_headers/hexlify.o 00:27:57.610 CXX test/cpp_headers/histogram_data.o 00:27:57.610 LINK blobcli 00:27:57.610 LINK pmr_persistence 
00:27:57.610 CXX test/cpp_headers/idxd.o 00:27:57.610 CXX test/cpp_headers/idxd_spec.o 00:27:57.610 CXX test/cpp_headers/init.o 00:27:57.610 CXX test/cpp_headers/ioat.o 00:27:57.868 CXX test/cpp_headers/ioat_spec.o 00:27:57.868 LINK abort 00:27:57.868 CXX test/cpp_headers/iscsi_spec.o 00:27:57.868 CXX test/cpp_headers/json.o 00:27:57.868 CXX test/cpp_headers/jsonrpc.o 00:27:57.868 CXX test/cpp_headers/keyring.o 00:27:57.868 CXX test/cpp_headers/keyring_module.o 00:27:57.868 CXX test/cpp_headers/likely.o 00:27:58.125 CXX test/cpp_headers/log.o 00:27:58.125 CXX test/cpp_headers/lvol.o 00:27:58.125 CC examples/bdev/hello_world/hello_bdev.o 00:27:58.125 CC examples/bdev/bdevperf/bdevperf.o 00:27:58.125 CXX test/cpp_headers/md5.o 00:27:58.125 LINK cuse 00:27:58.125 CXX test/cpp_headers/memory.o 00:27:58.125 CXX test/cpp_headers/mmio.o 00:27:58.125 CXX test/cpp_headers/nbd.o 00:27:58.125 CXX test/cpp_headers/net.o 00:27:58.125 CXX test/cpp_headers/notify.o 00:27:58.125 CXX test/cpp_headers/nvme.o 00:27:58.383 CXX test/cpp_headers/nvme_intel.o 00:27:58.383 CXX test/cpp_headers/nvme_ocssd.o 00:27:58.383 CXX test/cpp_headers/nvme_ocssd_spec.o 00:27:58.383 CXX test/cpp_headers/nvme_spec.o 00:27:58.383 CXX test/cpp_headers/nvme_zns.o 00:27:58.383 CXX test/cpp_headers/nvmf_cmd.o 00:27:58.383 LINK hello_bdev 00:27:58.641 CXX test/cpp_headers/nvmf_fc_spec.o 00:27:58.641 CXX test/cpp_headers/nvmf.o 00:27:58.641 CXX test/cpp_headers/nvmf_spec.o 00:27:58.641 CXX test/cpp_headers/nvmf_transport.o 00:27:58.641 CXX test/cpp_headers/opal.o 00:27:58.641 CXX test/cpp_headers/opal_spec.o 00:27:58.641 CXX test/cpp_headers/pci_ids.o 00:27:58.641 CXX test/cpp_headers/pipe.o 00:27:58.641 CXX test/cpp_headers/queue.o 00:27:58.897 CXX test/cpp_headers/reduce.o 00:27:58.897 CXX test/cpp_headers/rpc.o 00:27:58.897 CXX test/cpp_headers/scheduler.o 00:27:58.897 CXX test/cpp_headers/scsi.o 00:27:58.897 CXX test/cpp_headers/scsi_spec.o 00:27:58.897 CXX test/cpp_headers/sock.o 00:27:58.897 CXX 
test/cpp_headers/stdinc.o 00:27:58.897 CXX test/cpp_headers/string.o 00:27:58.897 CXX test/cpp_headers/thread.o 00:27:58.897 CXX test/cpp_headers/trace.o 00:27:58.897 CXX test/cpp_headers/trace_parser.o 00:27:58.897 CXX test/cpp_headers/tree.o 00:27:58.897 CXX test/cpp_headers/ublk.o 00:27:59.154 LINK bdevperf 00:27:59.154 CXX test/cpp_headers/util.o 00:27:59.154 CXX test/cpp_headers/uuid.o 00:27:59.154 CXX test/cpp_headers/version.o 00:27:59.154 CXX test/cpp_headers/vfio_user_pci.o 00:27:59.154 CXX test/cpp_headers/vfio_user_spec.o 00:27:59.154 CXX test/cpp_headers/vhost.o 00:27:59.154 CXX test/cpp_headers/vmd.o 00:27:59.154 CXX test/cpp_headers/xor.o 00:27:59.154 CXX test/cpp_headers/zipf.o 00:27:59.720 CC examples/nvmf/nvmf/nvmf.o 00:27:59.979 LINK nvmf 00:28:03.266 LINK esnap 00:28:03.524 00:28:03.524 real 1m29.731s 00:28:03.524 user 7m51.371s 00:28:03.524 sys 1m53.584s 00:28:03.524 23:09:43 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:28:03.524 23:09:43 make -- common/autotest_common.sh@10 -- $ set +x 00:28:03.524 ************************************ 00:28:03.524 END TEST make 00:28:03.524 ************************************ 00:28:03.524 23:09:44 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:28:03.524 23:09:44 -- pm/common@29 -- $ signal_monitor_resources TERM 00:28:03.524 23:09:44 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:28:03.524 23:09:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:03.524 23:09:44 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:28:03.524 23:09:44 -- pm/common@44 -- $ pid=5247 00:28:03.524 23:09:44 -- pm/common@50 -- $ kill -TERM 5247 00:28:03.524 23:09:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:03.524 23:09:44 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:28:03.524 23:09:44 -- pm/common@44 -- $ pid=5249 00:28:03.524 23:09:44 -- pm/common@50 -- $ 
kill -TERM 5249 00:28:03.524 23:09:44 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:28:03.524 23:09:44 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:28:03.524 23:09:44 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:03.524 23:09:44 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:03.782 23:09:44 -- common/autotest_common.sh@1711 -- # lcov --version 00:28:03.782 23:09:44 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:03.782 23:09:44 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:03.782 23:09:44 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:03.782 23:09:44 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:03.782 23:09:44 -- scripts/common.sh@336 -- # IFS=.-: 00:28:03.783 23:09:44 -- scripts/common.sh@336 -- # read -ra ver1 00:28:03.783 23:09:44 -- scripts/common.sh@337 -- # IFS=.-: 00:28:03.783 23:09:44 -- scripts/common.sh@337 -- # read -ra ver2 00:28:03.783 23:09:44 -- scripts/common.sh@338 -- # local 'op=<' 00:28:03.783 23:09:44 -- scripts/common.sh@340 -- # ver1_l=2 00:28:03.783 23:09:44 -- scripts/common.sh@341 -- # ver2_l=1 00:28:03.783 23:09:44 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:03.783 23:09:44 -- scripts/common.sh@344 -- # case "$op" in 00:28:03.783 23:09:44 -- scripts/common.sh@345 -- # : 1 00:28:03.783 23:09:44 -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:03.783 23:09:44 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:03.783 23:09:44 -- scripts/common.sh@365 -- # decimal 1 00:28:03.783 23:09:44 -- scripts/common.sh@353 -- # local d=1 00:28:03.783 23:09:44 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:03.783 23:09:44 -- scripts/common.sh@355 -- # echo 1 00:28:03.783 23:09:44 -- scripts/common.sh@365 -- # ver1[v]=1 00:28:03.783 23:09:44 -- scripts/common.sh@366 -- # decimal 2 00:28:03.783 23:09:44 -- scripts/common.sh@353 -- # local d=2 00:28:03.783 23:09:44 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:03.783 23:09:44 -- scripts/common.sh@355 -- # echo 2 00:28:03.783 23:09:44 -- scripts/common.sh@366 -- # ver2[v]=2 00:28:03.783 23:09:44 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:03.783 23:09:44 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:03.783 23:09:44 -- scripts/common.sh@368 -- # return 0 00:28:03.783 23:09:44 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:03.783 23:09:44 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:03.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.783 --rc genhtml_branch_coverage=1 00:28:03.783 --rc genhtml_function_coverage=1 00:28:03.783 --rc genhtml_legend=1 00:28:03.783 --rc geninfo_all_blocks=1 00:28:03.783 --rc geninfo_unexecuted_blocks=1 00:28:03.783 00:28:03.783 ' 00:28:03.783 23:09:44 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:03.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.783 --rc genhtml_branch_coverage=1 00:28:03.783 --rc genhtml_function_coverage=1 00:28:03.783 --rc genhtml_legend=1 00:28:03.783 --rc geninfo_all_blocks=1 00:28:03.783 --rc geninfo_unexecuted_blocks=1 00:28:03.783 00:28:03.783 ' 00:28:03.783 23:09:44 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:03.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.783 --rc genhtml_branch_coverage=1 00:28:03.783 --rc 
genhtml_function_coverage=1 00:28:03.783 --rc genhtml_legend=1 00:28:03.783 --rc geninfo_all_blocks=1 00:28:03.783 --rc geninfo_unexecuted_blocks=1 00:28:03.783 00:28:03.783 ' 00:28:03.783 23:09:44 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:03.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.783 --rc genhtml_branch_coverage=1 00:28:03.783 --rc genhtml_function_coverage=1 00:28:03.783 --rc genhtml_legend=1 00:28:03.783 --rc geninfo_all_blocks=1 00:28:03.783 --rc geninfo_unexecuted_blocks=1 00:28:03.783 00:28:03.783 ' 00:28:03.783 23:09:44 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:03.783 23:09:44 -- nvmf/common.sh@7 -- # uname -s 00:28:03.783 23:09:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:03.783 23:09:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:03.783 23:09:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:03.783 23:09:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:03.783 23:09:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:03.783 23:09:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:03.783 23:09:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:03.783 23:09:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:03.783 23:09:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:03.783 23:09:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:03.783 23:09:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:315856d4-f6fe-49fc-aa4d-9adba20f06a2 00:28:03.783 23:09:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=315856d4-f6fe-49fc-aa4d-9adba20f06a2 00:28:03.783 23:09:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:03.783 23:09:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:03.783 23:09:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:28:03.783 23:09:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:28:03.783 23:09:44 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:03.783 23:09:44 -- scripts/common.sh@15 -- # shopt -s extglob 00:28:03.783 23:09:44 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:03.783 23:09:44 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:03.783 23:09:44 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:03.783 23:09:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.783 23:09:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.783 23:09:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.783 23:09:44 -- paths/export.sh@5 -- # export PATH 00:28:03.783 23:09:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.783 23:09:44 -- nvmf/common.sh@51 -- # : 0 00:28:03.783 23:09:44 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:03.783 23:09:44 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:03.783 23:09:44 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:28:03.783 23:09:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:03.783 23:09:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:03.783 23:09:44 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:03.783 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:03.783 23:09:44 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:03.783 23:09:44 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:03.783 23:09:44 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:03.783 23:09:44 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:28:03.783 23:09:44 -- spdk/autotest.sh@32 -- # uname -s 00:28:03.783 23:09:44 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:28:03.783 23:09:44 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:28:03.783 23:09:44 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:28:03.783 23:09:44 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:28:03.783 23:09:44 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:28:03.783 23:09:44 -- spdk/autotest.sh@44 -- # modprobe nbd 00:28:03.783 23:09:44 -- spdk/autotest.sh@46 -- # type -P udevadm 00:28:03.783 23:09:44 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:28:03.783 23:09:44 -- spdk/autotest.sh@48 -- # udevadm_pid=54273 00:28:03.783 23:09:44 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:28:03.783 23:09:44 -- pm/common@17 -- # local monitor 00:28:03.783 23:09:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:28:03.783 23:09:44 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:28:03.783 23:09:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:28:03.783 23:09:44 -- pm/common@25 -- # sleep 1 00:28:03.783 23:09:44 -- pm/common@21 -- # date +%s 00:28:03.783 23:09:44 -- 
pm/common@21 -- # date +%s 00:28:03.783 23:09:44 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733785784 00:28:03.783 23:09:44 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733785784 00:28:03.783 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733785784_collect-vmstat.pm.log 00:28:03.783 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733785784_collect-cpu-load.pm.log 00:28:05.159 23:09:45 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:28:05.159 23:09:45 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:28:05.159 23:09:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:05.159 23:09:45 -- common/autotest_common.sh@10 -- # set +x 00:28:05.159 23:09:45 -- spdk/autotest.sh@59 -- # create_test_list 00:28:05.159 23:09:45 -- common/autotest_common.sh@752 -- # xtrace_disable 00:28:05.159 23:09:45 -- common/autotest_common.sh@10 -- # set +x 00:28:05.159 23:09:45 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:28:05.159 23:09:45 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:28:05.159 23:09:45 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:28:05.159 23:09:45 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:28:05.159 23:09:45 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:28:05.159 23:09:45 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:28:05.159 23:09:45 -- common/autotest_common.sh@1457 -- # uname 00:28:05.159 23:09:45 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:28:05.159 23:09:45 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:28:05.159 23:09:45 -- common/autotest_common.sh@1477 -- 
# uname 00:28:05.159 23:09:45 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:28:05.159 23:09:45 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:28:05.159 23:09:45 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:28:05.159 lcov: LCOV version 1.15 00:28:05.159 23:09:45 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:28:23.251 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:28:23.251 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:28:38.141 23:10:17 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:28:38.141 23:10:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:38.141 23:10:17 -- common/autotest_common.sh@10 -- # set +x 00:28:38.141 23:10:17 -- spdk/autotest.sh@78 -- # rm -f 00:28:38.141 23:10:17 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:38.141 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:38.141 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:28:38.141 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:28:38.141 23:10:18 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:28:38.141 23:10:18 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:28:38.141 23:10:18 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:28:38.141 23:10:18 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:28:38.141 
23:10:18 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:28:38.141 23:10:18 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:28:38.141 23:10:18 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:28:38.141 23:10:18 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:28:38.141 23:10:18 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:28:38.141 23:10:18 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:28:38.141 23:10:18 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:38.141 23:10:18 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:38.141 23:10:18 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:38.141 23:10:18 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:28:38.141 23:10:18 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:28:38.141 23:10:18 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:28:38.141 23:10:18 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:28:38.141 23:10:18 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:28:38.141 23:10:18 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:28:38.141 23:10:18 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:38.141 23:10:18 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:28:38.141 23:10:18 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:28:38.141 23:10:18 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:28:38.141 23:10:18 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:28:38.141 23:10:18 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:38.141 23:10:18 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:28:38.141 23:10:18 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:28:38.141 23:10:18 -- 
common/autotest_common.sh@1650 -- # local device=nvme1n3 00:28:38.141 23:10:18 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:28:38.141 23:10:18 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:38.141 23:10:18 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:28:38.141 23:10:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:28:38.141 23:10:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:28:38.141 23:10:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:28:38.141 23:10:18 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:28:38.141 23:10:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:28:38.141 No valid GPT data, bailing 00:28:38.141 23:10:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:38.141 23:10:18 -- scripts/common.sh@394 -- # pt= 00:28:38.141 23:10:18 -- scripts/common.sh@395 -- # return 1 00:28:38.141 23:10:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:28:38.141 1+0 records in 00:28:38.141 1+0 records out 00:28:38.141 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00433709 s, 242 MB/s 00:28:38.141 23:10:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:28:38.141 23:10:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:28:38.141 23:10:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:28:38.141 23:10:18 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:28:38.141 23:10:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:28:38.141 No valid GPT data, bailing 00:28:38.141 23:10:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:28:38.141 23:10:18 -- scripts/common.sh@394 -- # pt= 00:28:38.141 23:10:18 -- scripts/common.sh@395 -- # return 1 00:28:38.141 23:10:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:28:38.141 1+0 records in 00:28:38.141 1+0 records 
out 00:28:38.141 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00397967 s, 263 MB/s 00:28:38.141 23:10:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:28:38.141 23:10:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:28:38.141 23:10:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:28:38.141 23:10:18 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:28:38.141 23:10:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:28:38.141 No valid GPT data, bailing 00:28:38.141 23:10:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:28:38.141 23:10:18 -- scripts/common.sh@394 -- # pt= 00:28:38.141 23:10:18 -- scripts/common.sh@395 -- # return 1 00:28:38.141 23:10:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:28:38.141 1+0 records in 00:28:38.141 1+0 records out 00:28:38.141 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00444109 s, 236 MB/s 00:28:38.141 23:10:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:28:38.141 23:10:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:28:38.141 23:10:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:28:38.141 23:10:18 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:28:38.141 23:10:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:28:38.399 No valid GPT data, bailing 00:28:38.399 23:10:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:28:38.399 23:10:18 -- scripts/common.sh@394 -- # pt= 00:28:38.399 23:10:18 -- scripts/common.sh@395 -- # return 1 00:28:38.399 23:10:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:28:38.399 1+0 records in 00:28:38.399 1+0 records out 00:28:38.400 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00464953 s, 226 MB/s 00:28:38.400 23:10:18 -- spdk/autotest.sh@105 -- # sync 00:28:38.400 23:10:18 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd 
reap_spdk_processes 00:28:38.400 23:10:18 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:28:38.400 23:10:18 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:28:41.685 23:10:21 -- spdk/autotest.sh@111 -- # uname -s 00:28:41.685 23:10:21 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:28:41.685 23:10:21 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:28:41.685 23:10:21 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:28:41.943 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:41.943 Hugepages 00:28:41.943 node hugesize free / total 00:28:41.943 node0 1048576kB 0 / 0 00:28:41.943 node0 2048kB 0 / 0 00:28:41.943 00:28:41.943 Type BDF Vendor Device NUMA Driver Device Block devices 00:28:41.943 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:28:41.943 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:28:42.201 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:28:42.201 23:10:22 -- spdk/autotest.sh@117 -- # uname -s 00:28:42.201 23:10:22 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:28:42.201 23:10:22 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:28:42.201 23:10:22 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:42.821 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:43.080 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:43.080 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:43.080 23:10:23 -- common/autotest_common.sh@1517 -- # sleep 1 00:28:44.465 23:10:24 -- common/autotest_common.sh@1518 -- # bdfs=() 00:28:44.465 23:10:24 -- common/autotest_common.sh@1518 -- # local bdfs 00:28:44.465 23:10:24 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:28:44.465 23:10:24 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 
00:28:44.465 23:10:24 -- common/autotest_common.sh@1498 -- # bdfs=() 00:28:44.465 23:10:24 -- common/autotest_common.sh@1498 -- # local bdfs 00:28:44.465 23:10:24 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:44.465 23:10:24 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:28:44.465 23:10:24 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:28:44.465 23:10:24 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:28:44.465 23:10:24 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:28:44.465 23:10:24 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:44.723 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:44.723 Waiting for block devices as requested 00:28:44.723 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:44.723 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:44.980 23:10:25 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:28:44.980 23:10:25 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:28:44.980 23:10:25 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:28:44.980 23:10:25 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:28:44.980 23:10:25 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:28:44.980 23:10:25 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:28:44.980 23:10:25 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:28:44.980 23:10:25 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:28:44.980 23:10:25 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:28:44.980 
23:10:25 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:28:44.980 23:10:25 -- common/autotest_common.sh@1531 -- # grep oacs 00:28:44.980 23:10:25 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:28:44.980 23:10:25 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:28:44.980 23:10:25 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:28:44.980 23:10:25 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:28:44.980 23:10:25 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:28:44.980 23:10:25 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:28:44.980 23:10:25 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:28:44.980 23:10:25 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:28:44.980 23:10:25 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:28:44.980 23:10:25 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:28:44.980 23:10:25 -- common/autotest_common.sh@1543 -- # continue 00:28:44.980 23:10:25 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:28:44.980 23:10:25 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:28:44.980 23:10:25 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:28:44.980 23:10:25 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:28:44.980 23:10:25 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:28:44.980 23:10:25 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:28:44.980 23:10:25 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:28:44.980 23:10:25 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:28:44.980 23:10:25 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:28:44.980 23:10:25 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:28:44.980 23:10:25 -- 
common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:28:44.980 23:10:25 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:28:44.980 23:10:25 -- common/autotest_common.sh@1531 -- # grep oacs 00:28:44.980 23:10:25 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:28:44.980 23:10:25 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:28:44.980 23:10:25 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:28:44.980 23:10:25 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:28:44.980 23:10:25 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:28:44.980 23:10:25 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:28:44.980 23:10:25 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:28:44.980 23:10:25 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:28:44.980 23:10:25 -- common/autotest_common.sh@1543 -- # continue 00:28:44.980 23:10:25 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:28:44.980 23:10:25 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:44.980 23:10:25 -- common/autotest_common.sh@10 -- # set +x 00:28:44.980 23:10:25 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:28:44.980 23:10:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:44.980 23:10:25 -- common/autotest_common.sh@10 -- # set +x 00:28:44.980 23:10:25 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:45.916 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:45.916 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:45.916 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:45.916 23:10:26 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:28:45.916 23:10:26 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:45.916 23:10:26 -- common/autotest_common.sh@10 -- # set +x 00:28:45.916 23:10:26 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:28:45.916 23:10:26 -- 
common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:28:45.916 23:10:26 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:28:45.916 23:10:26 -- common/autotest_common.sh@1563 -- # bdfs=() 00:28:45.916 23:10:26 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:28:45.916 23:10:26 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:28:45.916 23:10:26 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:28:45.916 23:10:26 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:28:45.916 23:10:26 -- common/autotest_common.sh@1498 -- # bdfs=() 00:28:45.916 23:10:26 -- common/autotest_common.sh@1498 -- # local bdfs 00:28:45.916 23:10:26 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:46.175 23:10:26 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:28:46.175 23:10:26 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:28:46.175 23:10:26 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:28:46.175 23:10:26 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:28:46.175 23:10:26 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:28:46.175 23:10:26 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:28:46.175 23:10:26 -- common/autotest_common.sh@1566 -- # device=0x0010 00:28:46.175 23:10:26 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:28:46.175 23:10:26 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:28:46.175 23:10:26 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:28:46.175 23:10:26 -- common/autotest_common.sh@1566 -- # device=0x0010 00:28:46.175 23:10:26 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:28:46.175 23:10:26 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:28:46.175 23:10:26 -- 
common/autotest_common.sh@1572 -- # return 0 00:28:46.175 23:10:26 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:28:46.175 23:10:26 -- common/autotest_common.sh@1580 -- # return 0 00:28:46.175 23:10:26 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:28:46.175 23:10:26 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:28:46.175 23:10:26 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:28:46.175 23:10:26 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:28:46.175 23:10:26 -- spdk/autotest.sh@149 -- # timing_enter lib 00:28:46.175 23:10:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:46.175 23:10:26 -- common/autotest_common.sh@10 -- # set +x 00:28:46.175 23:10:26 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:28:46.175 23:10:26 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:28:46.175 23:10:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:46.175 23:10:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:46.175 23:10:26 -- common/autotest_common.sh@10 -- # set +x 00:28:46.175 ************************************ 00:28:46.175 START TEST env 00:28:46.175 ************************************ 00:28:46.175 23:10:26 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:28:46.175 * Looking for test storage... 
00:28:46.175 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:28:46.175 23:10:26 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:46.175 23:10:26 env -- common/autotest_common.sh@1711 -- # lcov --version 00:28:46.175 23:10:26 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:46.433 23:10:26 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:46.433 23:10:26 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:46.433 23:10:26 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:46.433 23:10:26 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:46.433 23:10:26 env -- scripts/common.sh@336 -- # IFS=.-: 00:28:46.433 23:10:26 env -- scripts/common.sh@336 -- # read -ra ver1 00:28:46.433 23:10:26 env -- scripts/common.sh@337 -- # IFS=.-: 00:28:46.433 23:10:26 env -- scripts/common.sh@337 -- # read -ra ver2 00:28:46.433 23:10:26 env -- scripts/common.sh@338 -- # local 'op=<' 00:28:46.433 23:10:26 env -- scripts/common.sh@340 -- # ver1_l=2 00:28:46.433 23:10:26 env -- scripts/common.sh@341 -- # ver2_l=1 00:28:46.433 23:10:26 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:46.433 23:10:26 env -- scripts/common.sh@344 -- # case "$op" in 00:28:46.433 23:10:26 env -- scripts/common.sh@345 -- # : 1 00:28:46.433 23:10:26 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:46.433 23:10:26 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:46.433 23:10:26 env -- scripts/common.sh@365 -- # decimal 1 00:28:46.433 23:10:26 env -- scripts/common.sh@353 -- # local d=1 00:28:46.433 23:10:26 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:46.433 23:10:26 env -- scripts/common.sh@355 -- # echo 1 00:28:46.433 23:10:26 env -- scripts/common.sh@365 -- # ver1[v]=1 00:28:46.433 23:10:26 env -- scripts/common.sh@366 -- # decimal 2 00:28:46.433 23:10:26 env -- scripts/common.sh@353 -- # local d=2 00:28:46.433 23:10:26 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:46.433 23:10:26 env -- scripts/common.sh@355 -- # echo 2 00:28:46.433 23:10:26 env -- scripts/common.sh@366 -- # ver2[v]=2 00:28:46.433 23:10:26 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:46.433 23:10:26 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:46.433 23:10:26 env -- scripts/common.sh@368 -- # return 0 00:28:46.433 23:10:26 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:46.433 23:10:26 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:46.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.433 --rc genhtml_branch_coverage=1 00:28:46.433 --rc genhtml_function_coverage=1 00:28:46.433 --rc genhtml_legend=1 00:28:46.433 --rc geninfo_all_blocks=1 00:28:46.433 --rc geninfo_unexecuted_blocks=1 00:28:46.433 00:28:46.433 ' 00:28:46.433 23:10:26 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:46.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.433 --rc genhtml_branch_coverage=1 00:28:46.433 --rc genhtml_function_coverage=1 00:28:46.433 --rc genhtml_legend=1 00:28:46.433 --rc geninfo_all_blocks=1 00:28:46.433 --rc geninfo_unexecuted_blocks=1 00:28:46.433 00:28:46.433 ' 00:28:46.433 23:10:26 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:46.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:28:46.433 --rc genhtml_branch_coverage=1 00:28:46.433 --rc genhtml_function_coverage=1 00:28:46.433 --rc genhtml_legend=1 00:28:46.433 --rc geninfo_all_blocks=1 00:28:46.433 --rc geninfo_unexecuted_blocks=1 00:28:46.433 00:28:46.433 ' 00:28:46.433 23:10:26 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:46.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.433 --rc genhtml_branch_coverage=1 00:28:46.433 --rc genhtml_function_coverage=1 00:28:46.433 --rc genhtml_legend=1 00:28:46.433 --rc geninfo_all_blocks=1 00:28:46.433 --rc geninfo_unexecuted_blocks=1 00:28:46.433 00:28:46.433 ' 00:28:46.433 23:10:26 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:28:46.433 23:10:26 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:46.433 23:10:26 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:46.433 23:10:26 env -- common/autotest_common.sh@10 -- # set +x 00:28:46.433 ************************************ 00:28:46.433 START TEST env_memory 00:28:46.433 ************************************ 00:28:46.433 23:10:26 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:28:46.433 00:28:46.433 00:28:46.433 CUnit - A unit testing framework for C - Version 2.1-3 00:28:46.433 http://cunit.sourceforge.net/ 00:28:46.433 00:28:46.433 00:28:46.433 Suite: memory 00:28:46.433 Test: alloc and free memory map ...[2024-12-09 23:10:26.978511] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:28:46.433 passed 00:28:46.433 Test: mem map translation ...[2024-12-09 23:10:27.030055] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:28:46.433 [2024-12-09 23:10:27.030167] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:28:46.433 [2024-12-09 23:10:27.030269] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:28:46.433 [2024-12-09 23:10:27.030301] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:28:46.693 passed 00:28:46.693 Test: mem map registration ...[2024-12-09 23:10:27.106361] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:28:46.693 [2024-12-09 23:10:27.106579] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:28:46.693 passed 00:28:46.693 Test: mem map adjacent registrations ...passed 00:28:46.693 00:28:46.693 Run Summary: Type Total Ran Passed Failed Inactive 00:28:46.693 suites 1 1 n/a 0 0 00:28:46.693 tests 4 4 4 0 0 00:28:46.693 asserts 152 152 152 0 n/a 00:28:46.693 00:28:46.693 Elapsed time = 0.258 seconds 00:28:46.693 00:28:46.693 real 0m0.317s 00:28:46.693 user 0m0.270s 00:28:46.693 sys 0m0.033s 00:28:46.693 23:10:27 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:46.693 ************************************ 00:28:46.693 END TEST env_memory 00:28:46.693 23:10:27 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:28:46.693 ************************************ 00:28:46.693 23:10:27 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:28:46.693 23:10:27 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:46.693 23:10:27 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:46.693 23:10:27 env -- common/autotest_common.sh@10 -- # set +x 00:28:46.693 
************************************ 00:28:46.693 START TEST env_vtophys 00:28:46.693 ************************************ 00:28:46.693 23:10:27 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:28:46.953 EAL: lib.eal log level changed from notice to debug 00:28:46.953 EAL: Detected lcore 0 as core 0 on socket 0 00:28:46.953 EAL: Detected lcore 1 as core 0 on socket 0 00:28:46.953 EAL: Detected lcore 2 as core 0 on socket 0 00:28:46.953 EAL: Detected lcore 3 as core 0 on socket 0 00:28:46.953 EAL: Detected lcore 4 as core 0 on socket 0 00:28:46.953 EAL: Detected lcore 5 as core 0 on socket 0 00:28:46.953 EAL: Detected lcore 6 as core 0 on socket 0 00:28:46.953 EAL: Detected lcore 7 as core 0 on socket 0 00:28:46.953 EAL: Detected lcore 8 as core 0 on socket 0 00:28:46.953 EAL: Detected lcore 9 as core 0 on socket 0 00:28:46.953 EAL: Maximum logical cores by configuration: 128 00:28:46.953 EAL: Detected CPU lcores: 10 00:28:46.953 EAL: Detected NUMA nodes: 1 00:28:46.953 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:28:46.953 EAL: Detected shared linkage of DPDK 00:28:46.953 EAL: No shared files mode enabled, IPC will be disabled 00:28:46.953 EAL: Selected IOVA mode 'PA' 00:28:46.953 EAL: Probing VFIO support... 00:28:46.953 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:28:46.953 EAL: VFIO modules not loaded, skipping VFIO support... 00:28:46.953 EAL: Ask a virtual area of 0x2e000 bytes 00:28:46.953 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:28:46.953 EAL: Setting up physically contiguous memory... 
00:28:46.953 EAL: Setting maximum number of open files to 524288 00:28:46.953 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:28:46.953 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:28:46.953 EAL: Ask a virtual area of 0x61000 bytes 00:28:46.953 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:28:46.953 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:28:46.953 EAL: Ask a virtual area of 0x400000000 bytes 00:28:46.953 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:28:46.953 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:28:46.953 EAL: Ask a virtual area of 0x61000 bytes 00:28:46.953 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:28:46.953 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:28:46.953 EAL: Ask a virtual area of 0x400000000 bytes 00:28:46.953 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:28:46.953 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:28:46.953 EAL: Ask a virtual area of 0x61000 bytes 00:28:46.953 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:28:46.953 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:28:46.953 EAL: Ask a virtual area of 0x400000000 bytes 00:28:46.953 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:28:46.953 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:28:46.953 EAL: Ask a virtual area of 0x61000 bytes 00:28:46.953 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:28:46.953 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:28:46.953 EAL: Ask a virtual area of 0x400000000 bytes 00:28:46.953 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:28:46.953 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:28:46.953 EAL: Hugepages will be freed exactly as allocated. 
00:28:46.953 EAL: No shared files mode enabled, IPC is disabled 00:28:46.953 EAL: No shared files mode enabled, IPC is disabled 00:28:46.953 EAL: TSC frequency is ~2490000 KHz 00:28:46.953 EAL: Main lcore 0 is ready (tid=7fcafe0baa40;cpuset=[0]) 00:28:46.953 EAL: Trying to obtain current memory policy. 00:28:46.953 EAL: Setting policy MPOL_PREFERRED for socket 0 00:28:46.953 EAL: Restoring previous memory policy: 0 00:28:46.953 EAL: request: mp_malloc_sync 00:28:46.953 EAL: No shared files mode enabled, IPC is disabled 00:28:46.953 EAL: Heap on socket 0 was expanded by 2MB 00:28:46.953 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:28:46.953 EAL: No PCI address specified using 'addr=' in: bus=pci 00:28:46.953 EAL: Mem event callback 'spdk:(nil)' registered 00:28:46.953 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:28:46.953 00:28:46.953 00:28:46.953 CUnit - A unit testing framework for C - Version 2.1-3 00:28:46.953 http://cunit.sourceforge.net/ 00:28:46.953 00:28:46.953 00:28:46.953 Suite: components_suite 00:28:47.519 Test: vtophys_malloc_test ...passed 00:28:47.519 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:28:47.520 EAL: Setting policy MPOL_PREFERRED for socket 0 00:28:47.520 EAL: Restoring previous memory policy: 4 00:28:47.520 EAL: Calling mem event callback 'spdk:(nil)' 00:28:47.520 EAL: request: mp_malloc_sync 00:28:47.520 EAL: No shared files mode enabled, IPC is disabled 00:28:47.520 EAL: Heap on socket 0 was expanded by 4MB 00:28:47.520 EAL: Calling mem event callback 'spdk:(nil)' 00:28:47.520 EAL: request: mp_malloc_sync 00:28:47.520 EAL: No shared files mode enabled, IPC is disabled 00:28:47.520 EAL: Heap on socket 0 was shrunk by 4MB 00:28:47.520 EAL: Trying to obtain current memory policy. 
00:28:47.520 EAL: Setting policy MPOL_PREFERRED for socket 0 00:28:47.520 EAL: Restoring previous memory policy: 4 00:28:47.520 EAL: Calling mem event callback 'spdk:(nil)' 00:28:47.520 EAL: request: mp_malloc_sync 00:28:47.520 EAL: No shared files mode enabled, IPC is disabled 00:28:47.520 EAL: Heap on socket 0 was expanded by 6MB 00:28:47.520 EAL: Calling mem event callback 'spdk:(nil)' 00:28:47.520 EAL: request: mp_malloc_sync 00:28:47.520 EAL: No shared files mode enabled, IPC is disabled 00:28:47.520 EAL: Heap on socket 0 was shrunk by 6MB 00:28:47.520 EAL: Trying to obtain current memory policy. 00:28:47.520 EAL: Setting policy MPOL_PREFERRED for socket 0 00:28:47.520 EAL: Restoring previous memory policy: 4 00:28:47.520 EAL: Calling mem event callback 'spdk:(nil)' 00:28:47.520 EAL: request: mp_malloc_sync 00:28:47.520 EAL: No shared files mode enabled, IPC is disabled 00:28:47.520 EAL: Heap on socket 0 was expanded by 10MB 00:28:47.520 EAL: Calling mem event callback 'spdk:(nil)' 00:28:47.520 EAL: request: mp_malloc_sync 00:28:47.520 EAL: No shared files mode enabled, IPC is disabled 00:28:47.520 EAL: Heap on socket 0 was shrunk by 10MB 00:28:47.520 EAL: Trying to obtain current memory policy. 00:28:47.520 EAL: Setting policy MPOL_PREFERRED for socket 0 00:28:47.520 EAL: Restoring previous memory policy: 4 00:28:47.520 EAL: Calling mem event callback 'spdk:(nil)' 00:28:47.520 EAL: request: mp_malloc_sync 00:28:47.520 EAL: No shared files mode enabled, IPC is disabled 00:28:47.520 EAL: Heap on socket 0 was expanded by 18MB 00:28:47.520 EAL: Calling mem event callback 'spdk:(nil)' 00:28:47.520 EAL: request: mp_malloc_sync 00:28:47.520 EAL: No shared files mode enabled, IPC is disabled 00:28:47.520 EAL: Heap on socket 0 was shrunk by 18MB 00:28:47.520 EAL: Trying to obtain current memory policy. 
00:28:47.520 EAL: Setting policy MPOL_PREFERRED for socket 0 00:28:47.520 EAL: Restoring previous memory policy: 4 00:28:47.520 EAL: Calling mem event callback 'spdk:(nil)' 00:28:47.520 EAL: request: mp_malloc_sync 00:28:47.520 EAL: No shared files mode enabled, IPC is disabled 00:28:47.520 EAL: Heap on socket 0 was expanded by 34MB 00:28:47.778 EAL: Calling mem event callback 'spdk:(nil)' 00:28:47.778 EAL: request: mp_malloc_sync 00:28:47.778 EAL: No shared files mode enabled, IPC is disabled 00:28:47.778 EAL: Heap on socket 0 was shrunk by 34MB 00:28:47.778 EAL: Trying to obtain current memory policy. 00:28:47.778 EAL: Setting policy MPOL_PREFERRED for socket 0 00:28:47.778 EAL: Restoring previous memory policy: 4 00:28:47.778 EAL: Calling mem event callback 'spdk:(nil)' 00:28:47.778 EAL: request: mp_malloc_sync 00:28:47.778 EAL: No shared files mode enabled, IPC is disabled 00:28:47.778 EAL: Heap on socket 0 was expanded by 66MB 00:28:47.778 EAL: Calling mem event callback 'spdk:(nil)' 00:28:47.778 EAL: request: mp_malloc_sync 00:28:47.778 EAL: No shared files mode enabled, IPC is disabled 00:28:47.778 EAL: Heap on socket 0 was shrunk by 66MB 00:28:48.036 EAL: Trying to obtain current memory policy. 00:28:48.036 EAL: Setting policy MPOL_PREFERRED for socket 0 00:28:48.036 EAL: Restoring previous memory policy: 4 00:28:48.036 EAL: Calling mem event callback 'spdk:(nil)' 00:28:48.036 EAL: request: mp_malloc_sync 00:28:48.036 EAL: No shared files mode enabled, IPC is disabled 00:28:48.036 EAL: Heap on socket 0 was expanded by 130MB 00:28:48.295 EAL: Calling mem event callback 'spdk:(nil)' 00:28:48.295 EAL: request: mp_malloc_sync 00:28:48.295 EAL: No shared files mode enabled, IPC is disabled 00:28:48.295 EAL: Heap on socket 0 was shrunk by 130MB 00:28:48.555 EAL: Trying to obtain current memory policy. 
00:28:48.555 EAL: Setting policy MPOL_PREFERRED for socket 0 00:28:48.555 EAL: Restoring previous memory policy: 4 00:28:48.555 EAL: Calling mem event callback 'spdk:(nil)' 00:28:48.555 EAL: request: mp_malloc_sync 00:28:48.555 EAL: No shared files mode enabled, IPC is disabled 00:28:48.555 EAL: Heap on socket 0 was expanded by 258MB 00:28:49.119 EAL: Calling mem event callback 'spdk:(nil)' 00:28:49.119 EAL: request: mp_malloc_sync 00:28:49.119 EAL: No shared files mode enabled, IPC is disabled 00:28:49.119 EAL: Heap on socket 0 was shrunk by 258MB 00:28:49.704 EAL: Trying to obtain current memory policy. 00:28:49.704 EAL: Setting policy MPOL_PREFERRED for socket 0 00:28:49.704 EAL: Restoring previous memory policy: 4 00:28:49.705 EAL: Calling mem event callback 'spdk:(nil)' 00:28:49.705 EAL: request: mp_malloc_sync 00:28:49.705 EAL: No shared files mode enabled, IPC is disabled 00:28:49.705 EAL: Heap on socket 0 was expanded by 514MB 00:28:50.649 EAL: Calling mem event callback 'spdk:(nil)' 00:28:50.649 EAL: request: mp_malloc_sync 00:28:50.649 EAL: No shared files mode enabled, IPC is disabled 00:28:50.649 EAL: Heap on socket 0 was shrunk by 514MB 00:28:51.587 EAL: Trying to obtain current memory policy. 
00:28:51.587 EAL: Setting policy MPOL_PREFERRED for socket 0 00:28:51.845 EAL: Restoring previous memory policy: 4 00:28:51.845 EAL: Calling mem event callback 'spdk:(nil)' 00:28:51.845 EAL: request: mp_malloc_sync 00:28:51.845 EAL: No shared files mode enabled, IPC is disabled 00:28:51.845 EAL: Heap on socket 0 was expanded by 1026MB 00:28:53.803 EAL: Calling mem event callback 'spdk:(nil)' 00:28:53.803 EAL: request: mp_malloc_sync 00:28:53.803 EAL: No shared files mode enabled, IPC is disabled 00:28:53.803 EAL: Heap on socket 0 was shrunk by 1026MB 00:28:55.707 passed 00:28:55.707 00:28:55.707 Run Summary: Type Total Ran Passed Failed Inactive 00:28:55.707 suites 1 1 n/a 0 0 00:28:55.707 tests 2 2 2 0 0 00:28:55.708 asserts 5817 5817 5817 0 n/a 00:28:55.708 00:28:55.708 Elapsed time = 8.663 seconds 00:28:55.708 EAL: Calling mem event callback 'spdk:(nil)' 00:28:55.708 EAL: request: mp_malloc_sync 00:28:55.708 EAL: No shared files mode enabled, IPC is disabled 00:28:55.708 EAL: Heap on socket 0 was shrunk by 2MB 00:28:55.708 EAL: No shared files mode enabled, IPC is disabled 00:28:55.708 EAL: No shared files mode enabled, IPC is disabled 00:28:55.708 EAL: No shared files mode enabled, IPC is disabled 00:28:55.708 00:28:55.708 real 0m9.028s 00:28:55.708 user 0m7.922s 00:28:55.708 sys 0m0.936s 00:28:55.708 23:10:36 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:55.708 23:10:36 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:28:55.708 ************************************ 00:28:55.708 END TEST env_vtophys 00:28:55.708 ************************************ 00:28:55.967 23:10:36 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:28:55.967 23:10:36 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:55.967 23:10:36 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:55.967 23:10:36 env -- common/autotest_common.sh@10 -- # set +x 00:28:55.967 
************************************ 00:28:55.967 START TEST env_pci 00:28:55.967 ************************************ 00:28:55.967 23:10:36 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:28:55.967 00:28:55.967 00:28:55.967 CUnit - A unit testing framework for C - Version 2.1-3 00:28:55.967 http://cunit.sourceforge.net/ 00:28:55.967 00:28:55.967 00:28:55.967 Suite: pci 00:28:55.967 Test: pci_hook ...[2024-12-09 23:10:36.411005] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56604 has claimed it 00:28:55.967 EAL: Cannot find device (10000:00:01.0) 00:28:55.967 EAL: Failed to attach device on primary process 00:28:55.967 passed 00:28:55.967 00:28:55.967 Run Summary: Type Total Ran Passed Failed Inactive 00:28:55.967 suites 1 1 n/a 0 0 00:28:55.967 tests 1 1 1 0 0 00:28:55.967 asserts 25 25 25 0 n/a 00:28:55.967 00:28:55.967 Elapsed time = 0.010 seconds 00:28:55.967 00:28:55.967 real 0m0.111s 00:28:55.967 user 0m0.043s 00:28:55.967 sys 0m0.067s 00:28:55.967 ************************************ 00:28:55.967 END TEST env_pci 00:28:55.967 ************************************ 00:28:55.967 23:10:36 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:55.967 23:10:36 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:28:55.967 23:10:36 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:28:55.967 23:10:36 env -- env/env.sh@15 -- # uname 00:28:55.967 23:10:36 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:28:55.967 23:10:36 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:28:55.967 23:10:36 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:28:55.967 23:10:36 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:28:55.967 23:10:36 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:55.967 23:10:36 env -- common/autotest_common.sh@10 -- # set +x 00:28:55.967 ************************************ 00:28:55.967 START TEST env_dpdk_post_init 00:28:55.967 ************************************ 00:28:55.967 23:10:36 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:28:56.225 EAL: Detected CPU lcores: 10 00:28:56.225 EAL: Detected NUMA nodes: 1 00:28:56.225 EAL: Detected shared linkage of DPDK 00:28:56.225 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:28:56.225 EAL: Selected IOVA mode 'PA' 00:28:56.225 TELEMETRY: No legacy callbacks, legacy socket not created 00:28:56.225 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:28:56.225 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:28:56.484 Starting DPDK initialization... 00:28:56.484 Starting SPDK post initialization... 00:28:56.484 SPDK NVMe probe 00:28:56.484 Attaching to 0000:00:10.0 00:28:56.484 Attaching to 0000:00:11.0 00:28:56.484 Attached to 0000:00:10.0 00:28:56.484 Attached to 0000:00:11.0 00:28:56.484 Cleaning up... 
00:28:56.484 00:28:56.484 real 0m0.321s 00:28:56.484 user 0m0.099s 00:28:56.484 sys 0m0.122s 00:28:56.484 23:10:36 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:56.484 ************************************ 00:28:56.484 END TEST env_dpdk_post_init 00:28:56.484 ************************************ 00:28:56.484 23:10:36 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:28:56.484 23:10:36 env -- env/env.sh@26 -- # uname 00:28:56.484 23:10:36 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:28:56.484 23:10:36 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:28:56.484 23:10:36 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:56.484 23:10:36 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:56.484 23:10:36 env -- common/autotest_common.sh@10 -- # set +x 00:28:56.484 ************************************ 00:28:56.484 START TEST env_mem_callbacks 00:28:56.484 ************************************ 00:28:56.484 23:10:36 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:28:56.484 EAL: Detected CPU lcores: 10 00:28:56.484 EAL: Detected NUMA nodes: 1 00:28:56.484 EAL: Detected shared linkage of DPDK 00:28:56.484 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:28:56.484 EAL: Selected IOVA mode 'PA' 00:28:56.743 00:28:56.743 00:28:56.743 CUnit - A unit testing framework for C - Version 2.1-3 00:28:56.743 http://cunit.sourceforge.net/ 00:28:56.743 00:28:56.743 00:28:56.743 Suite: memory 00:28:56.743 Test: test ... 
00:28:56.743 register 0x200000200000 2097152 00:28:56.743 malloc 3145728 00:28:56.743 TELEMETRY: No legacy callbacks, legacy socket not created 00:28:56.743 register 0x200000400000 4194304 00:28:56.743 buf 0x2000004fffc0 len 3145728 PASSED 00:28:56.743 malloc 64 00:28:56.743 buf 0x2000004ffec0 len 64 PASSED 00:28:56.743 malloc 4194304 00:28:56.743 register 0x200000800000 6291456 00:28:56.743 buf 0x2000009fffc0 len 4194304 PASSED 00:28:56.743 free 0x2000004fffc0 3145728 00:28:56.743 free 0x2000004ffec0 64 00:28:56.743 unregister 0x200000400000 4194304 PASSED 00:28:56.743 free 0x2000009fffc0 4194304 00:28:56.743 unregister 0x200000800000 6291456 PASSED 00:28:56.743 malloc 8388608 00:28:56.743 register 0x200000400000 10485760 00:28:56.743 buf 0x2000005fffc0 len 8388608 PASSED 00:28:56.743 free 0x2000005fffc0 8388608 00:28:56.743 unregister 0x200000400000 10485760 PASSED 00:28:56.743 passed 00:28:56.743 00:28:56.743 Run Summary: Type Total Ran Passed Failed Inactive 00:28:56.743 suites 1 1 n/a 0 0 00:28:56.743 tests 1 1 1 0 0 00:28:56.743 asserts 15 15 15 0 n/a 00:28:56.743 00:28:56.743 Elapsed time = 0.085 seconds 00:28:56.743 00:28:56.743 real 0m0.306s 00:28:56.743 user 0m0.110s 00:28:56.743 sys 0m0.092s 00:28:56.743 23:10:37 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:56.744 ************************************ 00:28:56.744 END TEST env_mem_callbacks 00:28:56.744 ************************************ 00:28:56.744 23:10:37 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:28:56.744 ************************************ 00:28:56.744 END TEST env 00:28:56.744 ************************************ 00:28:56.744 00:28:56.744 real 0m10.638s 00:28:56.744 user 0m8.674s 00:28:56.744 sys 0m1.574s 00:28:56.744 23:10:37 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:56.744 23:10:37 env -- common/autotest_common.sh@10 -- # set +x 00:28:56.744 23:10:37 -- spdk/autotest.sh@156 -- # run_test rpc 
/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:28:56.744 23:10:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:56.744 23:10:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:56.744 23:10:37 -- common/autotest_common.sh@10 -- # set +x 00:28:56.744 ************************************ 00:28:56.744 START TEST rpc 00:28:56.744 ************************************ 00:28:56.744 23:10:37 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:28:57.003 * Looking for test storage... 00:28:57.003 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:28:57.003 23:10:37 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:57.003 23:10:37 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:57.003 23:10:37 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:28:57.003 23:10:37 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:57.003 23:10:37 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:57.003 23:10:37 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:57.003 23:10:37 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:57.003 23:10:37 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:28:57.003 23:10:37 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:28:57.003 23:10:37 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:28:57.003 23:10:37 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:28:57.003 23:10:37 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:28:57.003 23:10:37 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:28:57.003 23:10:37 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:28:57.003 23:10:37 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:57.003 23:10:37 rpc -- scripts/common.sh@344 -- # case "$op" in 00:28:57.003 23:10:37 rpc -- scripts/common.sh@345 -- # : 1 00:28:57.003 23:10:37 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:57.003 23:10:37 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:57.003 23:10:37 rpc -- scripts/common.sh@365 -- # decimal 1 00:28:57.003 23:10:37 rpc -- scripts/common.sh@353 -- # local d=1 00:28:57.003 23:10:37 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:57.003 23:10:37 rpc -- scripts/common.sh@355 -- # echo 1 00:28:57.003 23:10:37 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:28:57.003 23:10:37 rpc -- scripts/common.sh@366 -- # decimal 2 00:28:57.003 23:10:37 rpc -- scripts/common.sh@353 -- # local d=2 00:28:57.003 23:10:37 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:57.003 23:10:37 rpc -- scripts/common.sh@355 -- # echo 2 00:28:57.003 23:10:37 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:28:57.003 23:10:37 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:57.003 23:10:37 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:57.003 23:10:37 rpc -- scripts/common.sh@368 -- # return 0 00:28:57.003 23:10:37 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:57.003 23:10:37 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:57.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:57.003 --rc genhtml_branch_coverage=1 00:28:57.003 --rc genhtml_function_coverage=1 00:28:57.003 --rc genhtml_legend=1 00:28:57.003 --rc geninfo_all_blocks=1 00:28:57.003 --rc geninfo_unexecuted_blocks=1 00:28:57.003 00:28:57.003 ' 00:28:57.003 23:10:37 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:57.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:57.003 --rc genhtml_branch_coverage=1 00:28:57.003 --rc genhtml_function_coverage=1 00:28:57.003 --rc genhtml_legend=1 00:28:57.003 --rc geninfo_all_blocks=1 00:28:57.003 --rc geninfo_unexecuted_blocks=1 00:28:57.003 00:28:57.003 ' 00:28:57.003 23:10:37 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:57.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:28:57.003 --rc genhtml_branch_coverage=1 00:28:57.003 --rc genhtml_function_coverage=1 00:28:57.003 --rc genhtml_legend=1 00:28:57.003 --rc geninfo_all_blocks=1 00:28:57.003 --rc geninfo_unexecuted_blocks=1 00:28:57.003 00:28:57.003 ' 00:28:57.003 23:10:37 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:57.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:57.003 --rc genhtml_branch_coverage=1 00:28:57.003 --rc genhtml_function_coverage=1 00:28:57.003 --rc genhtml_legend=1 00:28:57.003 --rc geninfo_all_blocks=1 00:28:57.003 --rc geninfo_unexecuted_blocks=1 00:28:57.003 00:28:57.003 ' 00:28:57.003 23:10:37 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56736 00:28:57.003 23:10:37 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:28:57.003 23:10:37 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:28:57.003 23:10:37 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56736 00:28:57.003 23:10:37 rpc -- common/autotest_common.sh@835 -- # '[' -z 56736 ']' 00:28:57.003 23:10:37 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:57.003 23:10:37 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:57.003 23:10:37 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:57.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:57.003 23:10:37 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:57.003 23:10:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:28:57.262 [2024-12-09 23:10:37.694626] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:28:57.262 [2024-12-09 23:10:37.694996] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56736 ] 00:28:57.262 [2024-12-09 23:10:37.874380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.522 [2024-12-09 23:10:38.003207] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:28:57.522 [2024-12-09 23:10:38.003285] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56736' to capture a snapshot of events at runtime. 00:28:57.522 [2024-12-09 23:10:38.003312] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:57.522 [2024-12-09 23:10:38.003326] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:57.522 [2024-12-09 23:10:38.003354] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56736 for offline analysis/debug. 
00:28:57.522 [2024-12-09 23:10:38.004793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.456 23:10:38 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:58.456 23:10:38 rpc -- common/autotest_common.sh@868 -- # return 0 00:28:58.456 23:10:38 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:28:58.456 23:10:38 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:28:58.456 23:10:38 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:28:58.456 23:10:38 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:28:58.456 23:10:38 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:58.456 23:10:38 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:58.456 23:10:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:28:58.456 ************************************ 00:28:58.456 START TEST rpc_integrity 00:28:58.456 ************************************ 00:28:58.456 23:10:38 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:28:58.456 23:10:38 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:28:58.456 23:10:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.456 23:10:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:58.456 23:10:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.456 23:10:38 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:28:58.456 23:10:38 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:28:58.456 23:10:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:28:58.456 23:10:39 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:28:58.456 23:10:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.456 23:10:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:58.456 23:10:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.456 23:10:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:28:58.456 23:10:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:28:58.456 23:10:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.456 23:10:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:58.456 23:10:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.456 23:10:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:28:58.456 { 00:28:58.456 "name": "Malloc0", 00:28:58.456 "aliases": [ 00:28:58.456 "4ac14cf2-c0ea-4e36-8926-53dbad09fd48" 00:28:58.456 ], 00:28:58.456 "product_name": "Malloc disk", 00:28:58.456 "block_size": 512, 00:28:58.456 "num_blocks": 16384, 00:28:58.456 "uuid": "4ac14cf2-c0ea-4e36-8926-53dbad09fd48", 00:28:58.456 "assigned_rate_limits": { 00:28:58.456 "rw_ios_per_sec": 0, 00:28:58.456 "rw_mbytes_per_sec": 0, 00:28:58.456 "r_mbytes_per_sec": 0, 00:28:58.456 "w_mbytes_per_sec": 0 00:28:58.456 }, 00:28:58.456 "claimed": false, 00:28:58.456 "zoned": false, 00:28:58.456 "supported_io_types": { 00:28:58.456 "read": true, 00:28:58.456 "write": true, 00:28:58.456 "unmap": true, 00:28:58.456 "flush": true, 00:28:58.456 "reset": true, 00:28:58.456 "nvme_admin": false, 00:28:58.456 "nvme_io": false, 00:28:58.456 "nvme_io_md": false, 00:28:58.456 "write_zeroes": true, 00:28:58.456 "zcopy": true, 00:28:58.456 "get_zone_info": false, 00:28:58.456 "zone_management": false, 00:28:58.456 "zone_append": false, 00:28:58.456 "compare": false, 00:28:58.456 "compare_and_write": false, 00:28:58.456 "abort": true, 00:28:58.456 "seek_hole": false, 
00:28:58.456 "seek_data": false, 00:28:58.456 "copy": true, 00:28:58.456 "nvme_iov_md": false 00:28:58.456 }, 00:28:58.456 "memory_domains": [ 00:28:58.456 { 00:28:58.456 "dma_device_id": "system", 00:28:58.456 "dma_device_type": 1 00:28:58.456 }, 00:28:58.456 { 00:28:58.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:58.456 "dma_device_type": 2 00:28:58.456 } 00:28:58.456 ], 00:28:58.456 "driver_specific": {} 00:28:58.456 } 00:28:58.456 ]' 00:28:58.456 23:10:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:28:58.715 23:10:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:28:58.715 23:10:39 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:28:58.715 23:10:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.715 23:10:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:58.715 [2024-12-09 23:10:39.131315] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:28:58.715 [2024-12-09 23:10:39.131422] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:58.715 [2024-12-09 23:10:39.131472] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:28:58.715 [2024-12-09 23:10:39.131501] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:58.715 [2024-12-09 23:10:39.134437] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:58.715 [2024-12-09 23:10:39.134499] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:28:58.715 Passthru0 00:28:58.715 23:10:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.715 23:10:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:28:58.715 23:10:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.715 23:10:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:28:58.715 23:10:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.715 23:10:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:28:58.715 { 00:28:58.715 "name": "Malloc0", 00:28:58.715 "aliases": [ 00:28:58.715 "4ac14cf2-c0ea-4e36-8926-53dbad09fd48" 00:28:58.715 ], 00:28:58.715 "product_name": "Malloc disk", 00:28:58.715 "block_size": 512, 00:28:58.715 "num_blocks": 16384, 00:28:58.715 "uuid": "4ac14cf2-c0ea-4e36-8926-53dbad09fd48", 00:28:58.715 "assigned_rate_limits": { 00:28:58.715 "rw_ios_per_sec": 0, 00:28:58.715 "rw_mbytes_per_sec": 0, 00:28:58.715 "r_mbytes_per_sec": 0, 00:28:58.715 "w_mbytes_per_sec": 0 00:28:58.715 }, 00:28:58.715 "claimed": true, 00:28:58.715 "claim_type": "exclusive_write", 00:28:58.715 "zoned": false, 00:28:58.715 "supported_io_types": { 00:28:58.715 "read": true, 00:28:58.715 "write": true, 00:28:58.715 "unmap": true, 00:28:58.715 "flush": true, 00:28:58.715 "reset": true, 00:28:58.715 "nvme_admin": false, 00:28:58.715 "nvme_io": false, 00:28:58.715 "nvme_io_md": false, 00:28:58.715 "write_zeroes": true, 00:28:58.715 "zcopy": true, 00:28:58.715 "get_zone_info": false, 00:28:58.715 "zone_management": false, 00:28:58.715 "zone_append": false, 00:28:58.715 "compare": false, 00:28:58.715 "compare_and_write": false, 00:28:58.715 "abort": true, 00:28:58.715 "seek_hole": false, 00:28:58.715 "seek_data": false, 00:28:58.715 "copy": true, 00:28:58.715 "nvme_iov_md": false 00:28:58.715 }, 00:28:58.715 "memory_domains": [ 00:28:58.715 { 00:28:58.715 "dma_device_id": "system", 00:28:58.715 "dma_device_type": 1 00:28:58.715 }, 00:28:58.715 { 00:28:58.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:58.715 "dma_device_type": 2 00:28:58.715 } 00:28:58.715 ], 00:28:58.715 "driver_specific": {} 00:28:58.715 }, 00:28:58.715 { 00:28:58.715 "name": "Passthru0", 00:28:58.715 "aliases": [ 00:28:58.715 "ab528402-fcea-5621-bb15-c4ebb6e25208" 00:28:58.715 ], 00:28:58.715 "product_name": "passthru", 00:28:58.715 
"block_size": 512, 00:28:58.715 "num_blocks": 16384, 00:28:58.715 "uuid": "ab528402-fcea-5621-bb15-c4ebb6e25208", 00:28:58.715 "assigned_rate_limits": { 00:28:58.715 "rw_ios_per_sec": 0, 00:28:58.715 "rw_mbytes_per_sec": 0, 00:28:58.715 "r_mbytes_per_sec": 0, 00:28:58.715 "w_mbytes_per_sec": 0 00:28:58.715 }, 00:28:58.715 "claimed": false, 00:28:58.715 "zoned": false, 00:28:58.715 "supported_io_types": { 00:28:58.715 "read": true, 00:28:58.715 "write": true, 00:28:58.715 "unmap": true, 00:28:58.715 "flush": true, 00:28:58.715 "reset": true, 00:28:58.715 "nvme_admin": false, 00:28:58.715 "nvme_io": false, 00:28:58.715 "nvme_io_md": false, 00:28:58.715 "write_zeroes": true, 00:28:58.715 "zcopy": true, 00:28:58.715 "get_zone_info": false, 00:28:58.715 "zone_management": false, 00:28:58.715 "zone_append": false, 00:28:58.715 "compare": false, 00:28:58.715 "compare_and_write": false, 00:28:58.715 "abort": true, 00:28:58.715 "seek_hole": false, 00:28:58.715 "seek_data": false, 00:28:58.715 "copy": true, 00:28:58.715 "nvme_iov_md": false 00:28:58.715 }, 00:28:58.715 "memory_domains": [ 00:28:58.715 { 00:28:58.715 "dma_device_id": "system", 00:28:58.715 "dma_device_type": 1 00:28:58.715 }, 00:28:58.715 { 00:28:58.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:58.715 "dma_device_type": 2 00:28:58.715 } 00:28:58.715 ], 00:28:58.715 "driver_specific": { 00:28:58.715 "passthru": { 00:28:58.715 "name": "Passthru0", 00:28:58.715 "base_bdev_name": "Malloc0" 00:28:58.715 } 00:28:58.715 } 00:28:58.715 } 00:28:58.715 ]' 00:28:58.715 23:10:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:28:58.715 23:10:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:28:58.715 23:10:39 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:28:58.715 23:10:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.715 23:10:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:58.715 23:10:39 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.715 23:10:39 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:58.715 23:10:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.715 23:10:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:58.715 23:10:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.715 23:10:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:28:58.715 23:10:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.715 23:10:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:58.715 23:10:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.715 23:10:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:28:58.715 23:10:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:28:58.715 23:10:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:28:58.715 00:28:58.715 real 0m0.354s 00:28:58.715 user 0m0.195s 00:28:58.715 ************************************ 00:28:58.715 END TEST rpc_integrity 00:28:58.715 ************************************ 00:28:58.715 sys 0m0.049s 00:28:58.715 23:10:39 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:58.715 23:10:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:58.975 23:10:39 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:28:58.975 23:10:39 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:58.975 23:10:39 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:58.975 23:10:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:28:58.975 ************************************ 00:28:58.975 START TEST rpc_plugins 00:28:58.975 ************************************ 00:28:58.975 23:10:39 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:28:58.975 23:10:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:28:58.975 23:10:39 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.975 23:10:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:28:58.975 23:10:39 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.975 23:10:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:28:58.975 23:10:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:28:58.975 23:10:39 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.975 23:10:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:28:58.975 23:10:39 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.975 23:10:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:28:58.975 { 00:28:58.975 "name": "Malloc1", 00:28:58.975 "aliases": [ 00:28:58.975 "b43f2ba0-fd7b-4a88-8dbd-484e2ec0f711" 00:28:58.975 ], 00:28:58.975 "product_name": "Malloc disk", 00:28:58.975 "block_size": 4096, 00:28:58.975 "num_blocks": 256, 00:28:58.975 "uuid": "b43f2ba0-fd7b-4a88-8dbd-484e2ec0f711", 00:28:58.975 "assigned_rate_limits": { 00:28:58.975 "rw_ios_per_sec": 0, 00:28:58.975 "rw_mbytes_per_sec": 0, 00:28:58.975 "r_mbytes_per_sec": 0, 00:28:58.975 "w_mbytes_per_sec": 0 00:28:58.975 }, 00:28:58.975 "claimed": false, 00:28:58.975 "zoned": false, 00:28:58.975 "supported_io_types": { 00:28:58.975 "read": true, 00:28:58.975 "write": true, 00:28:58.975 "unmap": true, 00:28:58.975 "flush": true, 00:28:58.975 "reset": true, 00:28:58.975 "nvme_admin": false, 00:28:58.975 "nvme_io": false, 00:28:58.975 "nvme_io_md": false, 00:28:58.975 "write_zeroes": true, 00:28:58.975 "zcopy": true, 00:28:58.975 "get_zone_info": false, 00:28:58.975 "zone_management": false, 00:28:58.975 "zone_append": false, 00:28:58.975 "compare": false, 00:28:58.975 "compare_and_write": false, 00:28:58.975 "abort": true, 00:28:58.975 "seek_hole": false, 00:28:58.975 "seek_data": false, 00:28:58.975 "copy": 
true, 00:28:58.975 "nvme_iov_md": false 00:28:58.975 }, 00:28:58.975 "memory_domains": [ 00:28:58.975 { 00:28:58.975 "dma_device_id": "system", 00:28:58.975 "dma_device_type": 1 00:28:58.975 }, 00:28:58.975 { 00:28:58.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:58.975 "dma_device_type": 2 00:28:58.975 } 00:28:58.975 ], 00:28:58.975 "driver_specific": {} 00:28:58.975 } 00:28:58.975 ]' 00:28:58.975 23:10:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:28:58.975 23:10:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:28:58.975 23:10:39 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:28:58.975 23:10:39 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.975 23:10:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:28:58.975 23:10:39 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.975 23:10:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:28:58.975 23:10:39 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.975 23:10:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:28:58.975 23:10:39 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.975 23:10:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:28:58.975 23:10:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:28:58.975 ************************************ 00:28:58.975 END TEST rpc_plugins 00:28:58.975 ************************************ 00:28:58.975 23:10:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:28:58.975 00:28:58.975 real 0m0.157s 00:28:58.975 user 0m0.083s 00:28:58.975 sys 0m0.026s 00:28:58.975 23:10:39 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:58.975 23:10:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:28:58.975 23:10:39 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:28:58.975 23:10:39 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:58.975 23:10:39 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:58.975 23:10:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:28:58.975 ************************************ 00:28:58.975 START TEST rpc_trace_cmd_test 00:28:58.975 ************************************ 00:28:58.975 23:10:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:28:58.975 23:10:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:28:58.975 23:10:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:28:58.975 23:10:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.975 23:10:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:28:59.234 23:10:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.234 23:10:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:28:59.234 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56736", 00:28:59.234 "tpoint_group_mask": "0x8", 00:28:59.234 "iscsi_conn": { 00:28:59.234 "mask": "0x2", 00:28:59.234 "tpoint_mask": "0x0" 00:28:59.234 }, 00:28:59.234 "scsi": { 00:28:59.234 "mask": "0x4", 00:28:59.234 "tpoint_mask": "0x0" 00:28:59.234 }, 00:28:59.234 "bdev": { 00:28:59.234 "mask": "0x8", 00:28:59.234 "tpoint_mask": "0xffffffffffffffff" 00:28:59.234 }, 00:28:59.234 "nvmf_rdma": { 00:28:59.234 "mask": "0x10", 00:28:59.234 "tpoint_mask": "0x0" 00:28:59.234 }, 00:28:59.234 "nvmf_tcp": { 00:28:59.234 "mask": "0x20", 00:28:59.234 "tpoint_mask": "0x0" 00:28:59.234 }, 00:28:59.234 "ftl": { 00:28:59.234 "mask": "0x40", 00:28:59.234 "tpoint_mask": "0x0" 00:28:59.234 }, 00:28:59.234 "blobfs": { 00:28:59.234 "mask": "0x80", 00:28:59.234 "tpoint_mask": "0x0" 00:28:59.234 }, 00:28:59.234 "dsa": { 00:28:59.234 "mask": "0x200", 00:28:59.234 "tpoint_mask": "0x0" 00:28:59.234 }, 00:28:59.234 "thread": { 00:28:59.234 "mask": "0x400", 00:28:59.234 
"tpoint_mask": "0x0" 00:28:59.234 }, 00:28:59.234 "nvme_pcie": { 00:28:59.234 "mask": "0x800", 00:28:59.234 "tpoint_mask": "0x0" 00:28:59.234 }, 00:28:59.234 "iaa": { 00:28:59.234 "mask": "0x1000", 00:28:59.234 "tpoint_mask": "0x0" 00:28:59.234 }, 00:28:59.234 "nvme_tcp": { 00:28:59.234 "mask": "0x2000", 00:28:59.234 "tpoint_mask": "0x0" 00:28:59.234 }, 00:28:59.234 "bdev_nvme": { 00:28:59.234 "mask": "0x4000", 00:28:59.234 "tpoint_mask": "0x0" 00:28:59.234 }, 00:28:59.234 "sock": { 00:28:59.234 "mask": "0x8000", 00:28:59.234 "tpoint_mask": "0x0" 00:28:59.234 }, 00:28:59.234 "blob": { 00:28:59.234 "mask": "0x10000", 00:28:59.234 "tpoint_mask": "0x0" 00:28:59.234 }, 00:28:59.234 "bdev_raid": { 00:28:59.234 "mask": "0x20000", 00:28:59.234 "tpoint_mask": "0x0" 00:28:59.234 }, 00:28:59.234 "scheduler": { 00:28:59.234 "mask": "0x40000", 00:28:59.234 "tpoint_mask": "0x0" 00:28:59.234 } 00:28:59.234 }' 00:28:59.234 23:10:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:28:59.234 23:10:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:28:59.234 23:10:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:28:59.234 23:10:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:28:59.234 23:10:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:28:59.234 23:10:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:28:59.234 23:10:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:28:59.234 23:10:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:28:59.234 23:10:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:28:59.234 ************************************ 00:28:59.234 END TEST rpc_trace_cmd_test 00:28:59.234 ************************************ 00:28:59.234 23:10:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:28:59.234 00:28:59.234 real 0m0.224s 00:28:59.234 user 
0m0.173s 00:28:59.234 sys 0m0.041s 00:28:59.234 23:10:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:59.234 23:10:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:28:59.493 23:10:39 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:28:59.493 23:10:39 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:28:59.493 23:10:39 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:28:59.493 23:10:39 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:59.493 23:10:39 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:59.493 23:10:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:28:59.493 ************************************ 00:28:59.493 START TEST rpc_daemon_integrity 00:28:59.493 ************************************ 00:28:59.493 23:10:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:28:59.493 23:10:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:28:59.493 23:10:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.493 23:10:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:59.493 23:10:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.493 23:10:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:28:59.493 23:10:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:28:59.493 23:10:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:28:59.493 23:10:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:28:59.493 23:10:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.493 23:10:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:59.493 23:10:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.493 23:10:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:28:59.493 23:10:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:28:59.493 23:10:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.493 23:10:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:59.493 23:10:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.493 23:10:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:28:59.493 { 00:28:59.493 "name": "Malloc2", 00:28:59.493 "aliases": [ 00:28:59.493 "b9189616-9757-472b-89e9-9cbbd40017bd" 00:28:59.493 ], 00:28:59.493 "product_name": "Malloc disk", 00:28:59.493 "block_size": 512, 00:28:59.493 "num_blocks": 16384, 00:28:59.493 "uuid": "b9189616-9757-472b-89e9-9cbbd40017bd", 00:28:59.493 "assigned_rate_limits": { 00:28:59.493 "rw_ios_per_sec": 0, 00:28:59.493 "rw_mbytes_per_sec": 0, 00:28:59.493 "r_mbytes_per_sec": 0, 00:28:59.493 "w_mbytes_per_sec": 0 00:28:59.493 }, 00:28:59.493 "claimed": false, 00:28:59.493 "zoned": false, 00:28:59.493 "supported_io_types": { 00:28:59.493 "read": true, 00:28:59.493 "write": true, 00:28:59.493 "unmap": true, 00:28:59.493 "flush": true, 00:28:59.493 "reset": true, 00:28:59.493 "nvme_admin": false, 00:28:59.493 "nvme_io": false, 00:28:59.493 "nvme_io_md": false, 00:28:59.493 "write_zeroes": true, 00:28:59.493 "zcopy": true, 00:28:59.493 "get_zone_info": false, 00:28:59.493 "zone_management": false, 00:28:59.493 "zone_append": false, 00:28:59.493 "compare": false, 00:28:59.493 "compare_and_write": false, 00:28:59.493 "abort": true, 00:28:59.493 "seek_hole": false, 00:28:59.493 "seek_data": false, 00:28:59.493 "copy": true, 00:28:59.493 "nvme_iov_md": false 00:28:59.493 }, 00:28:59.493 "memory_domains": [ 00:28:59.493 { 00:28:59.493 "dma_device_id": "system", 00:28:59.493 "dma_device_type": 1 00:28:59.493 }, 00:28:59.493 { 00:28:59.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:59.493 "dma_device_type": 2 00:28:59.493 } 
00:28:59.493 ], 00:28:59.493 "driver_specific": {} 00:28:59.493 } 00:28:59.493 ]' 00:28:59.493 23:10:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:28:59.493 23:10:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:28:59.493 23:10:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:28:59.493 23:10:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.493 23:10:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:59.493 [2024-12-09 23:10:40.063610] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:28:59.493 [2024-12-09 23:10:40.063698] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:59.493 [2024-12-09 23:10:40.063726] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:28:59.493 [2024-12-09 23:10:40.063742] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:59.493 [2024-12-09 23:10:40.066564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:59.493 [2024-12-09 23:10:40.066786] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:28:59.493 Passthru0 00:28:59.493 23:10:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.493 23:10:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:28:59.493 23:10:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.493 23:10:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:59.493 23:10:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.493 23:10:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:28:59.493 { 00:28:59.493 "name": "Malloc2", 00:28:59.493 "aliases": [ 00:28:59.493 "b9189616-9757-472b-89e9-9cbbd40017bd" 
00:28:59.493 ], 00:28:59.493 "product_name": "Malloc disk", 00:28:59.493 "block_size": 512, 00:28:59.493 "num_blocks": 16384, 00:28:59.493 "uuid": "b9189616-9757-472b-89e9-9cbbd40017bd", 00:28:59.493 "assigned_rate_limits": { 00:28:59.493 "rw_ios_per_sec": 0, 00:28:59.493 "rw_mbytes_per_sec": 0, 00:28:59.493 "r_mbytes_per_sec": 0, 00:28:59.493 "w_mbytes_per_sec": 0 00:28:59.493 }, 00:28:59.493 "claimed": true, 00:28:59.493 "claim_type": "exclusive_write", 00:28:59.493 "zoned": false, 00:28:59.493 "supported_io_types": { 00:28:59.493 "read": true, 00:28:59.493 "write": true, 00:28:59.493 "unmap": true, 00:28:59.493 "flush": true, 00:28:59.493 "reset": true, 00:28:59.493 "nvme_admin": false, 00:28:59.493 "nvme_io": false, 00:28:59.493 "nvme_io_md": false, 00:28:59.493 "write_zeroes": true, 00:28:59.493 "zcopy": true, 00:28:59.493 "get_zone_info": false, 00:28:59.493 "zone_management": false, 00:28:59.493 "zone_append": false, 00:28:59.493 "compare": false, 00:28:59.493 "compare_and_write": false, 00:28:59.493 "abort": true, 00:28:59.493 "seek_hole": false, 00:28:59.493 "seek_data": false, 00:28:59.493 "copy": true, 00:28:59.493 "nvme_iov_md": false 00:28:59.493 }, 00:28:59.493 "memory_domains": [ 00:28:59.493 { 00:28:59.493 "dma_device_id": "system", 00:28:59.493 "dma_device_type": 1 00:28:59.493 }, 00:28:59.493 { 00:28:59.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:59.494 "dma_device_type": 2 00:28:59.494 } 00:28:59.494 ], 00:28:59.494 "driver_specific": {} 00:28:59.494 }, 00:28:59.494 { 00:28:59.494 "name": "Passthru0", 00:28:59.494 "aliases": [ 00:28:59.494 "c634e6c2-28b9-5e8d-a782-63a98f1a8e2e" 00:28:59.494 ], 00:28:59.494 "product_name": "passthru", 00:28:59.494 "block_size": 512, 00:28:59.494 "num_blocks": 16384, 00:28:59.494 "uuid": "c634e6c2-28b9-5e8d-a782-63a98f1a8e2e", 00:28:59.494 "assigned_rate_limits": { 00:28:59.494 "rw_ios_per_sec": 0, 00:28:59.494 "rw_mbytes_per_sec": 0, 00:28:59.494 "r_mbytes_per_sec": 0, 00:28:59.494 "w_mbytes_per_sec": 0 
00:28:59.494 }, 00:28:59.494 "claimed": false, 00:28:59.494 "zoned": false, 00:28:59.494 "supported_io_types": { 00:28:59.494 "read": true, 00:28:59.494 "write": true, 00:28:59.494 "unmap": true, 00:28:59.494 "flush": true, 00:28:59.494 "reset": true, 00:28:59.494 "nvme_admin": false, 00:28:59.494 "nvme_io": false, 00:28:59.494 "nvme_io_md": false, 00:28:59.494 "write_zeroes": true, 00:28:59.494 "zcopy": true, 00:28:59.494 "get_zone_info": false, 00:28:59.494 "zone_management": false, 00:28:59.494 "zone_append": false, 00:28:59.494 "compare": false, 00:28:59.494 "compare_and_write": false, 00:28:59.494 "abort": true, 00:28:59.494 "seek_hole": false, 00:28:59.494 "seek_data": false, 00:28:59.494 "copy": true, 00:28:59.494 "nvme_iov_md": false 00:28:59.494 }, 00:28:59.494 "memory_domains": [ 00:28:59.494 { 00:28:59.494 "dma_device_id": "system", 00:28:59.494 "dma_device_type": 1 00:28:59.494 }, 00:28:59.494 { 00:28:59.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:59.494 "dma_device_type": 2 00:28:59.494 } 00:28:59.494 ], 00:28:59.494 "driver_specific": { 00:28:59.494 "passthru": { 00:28:59.494 "name": "Passthru0", 00:28:59.494 "base_bdev_name": "Malloc2" 00:28:59.494 } 00:28:59.494 } 00:28:59.494 } 00:28:59.494 ]' 00:28:59.494 23:10:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:28:59.753 23:10:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:28:59.753 23:10:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:28:59.753 23:10:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.753 23:10:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:59.753 23:10:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.753 23:10:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:28:59.753 23:10:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:59.753 23:10:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:59.753 23:10:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.753 23:10:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:28:59.753 23:10:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.753 23:10:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:59.753 23:10:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.753 23:10:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:28:59.753 23:10:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:28:59.753 ************************************ 00:28:59.753 END TEST rpc_daemon_integrity 00:28:59.753 ************************************ 00:28:59.753 23:10:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:28:59.753 00:28:59.753 real 0m0.358s 00:28:59.753 user 0m0.181s 00:28:59.753 sys 0m0.065s 00:28:59.753 23:10:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:59.753 23:10:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:59.753 23:10:40 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:59.753 23:10:40 rpc -- rpc/rpc.sh@84 -- # killprocess 56736 00:28:59.753 23:10:40 rpc -- common/autotest_common.sh@954 -- # '[' -z 56736 ']' 00:28:59.753 23:10:40 rpc -- common/autotest_common.sh@958 -- # kill -0 56736 00:28:59.753 23:10:40 rpc -- common/autotest_common.sh@959 -- # uname 00:28:59.754 23:10:40 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:59.754 23:10:40 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56736 00:28:59.754 killing process with pid 56736 00:28:59.754 23:10:40 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:59.754 23:10:40 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:28:59.754 23:10:40 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56736' 00:28:59.754 23:10:40 rpc -- common/autotest_common.sh@973 -- # kill 56736 00:28:59.754 23:10:40 rpc -- common/autotest_common.sh@978 -- # wait 56736 00:29:03.039 00:29:03.039 real 0m5.576s 00:29:03.039 user 0m6.066s 00:29:03.039 sys 0m0.954s 00:29:03.039 ************************************ 00:29:03.039 END TEST rpc 00:29:03.039 ************************************ 00:29:03.039 23:10:42 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:03.039 23:10:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:29:03.039 23:10:42 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:29:03.039 23:10:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:03.039 23:10:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:03.039 23:10:42 -- common/autotest_common.sh@10 -- # set +x 00:29:03.039 ************************************ 00:29:03.039 START TEST skip_rpc 00:29:03.039 ************************************ 00:29:03.039 23:10:43 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:29:03.039 * Looking for test storage... 
00:29:03.039 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:29:03.039 23:10:43 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:03.039 23:10:43 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:29:03.039 23:10:43 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:03.039 23:10:43 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:03.039 23:10:43 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:03.039 23:10:43 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:03.039 23:10:43 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:03.039 23:10:43 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:29:03.039 23:10:43 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:29:03.039 23:10:43 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:29:03.039 23:10:43 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:29:03.039 23:10:43 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:29:03.039 23:10:43 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:29:03.039 23:10:43 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:29:03.039 23:10:43 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:03.039 23:10:43 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:29:03.039 23:10:43 skip_rpc -- scripts/common.sh@345 -- # : 1 00:29:03.039 23:10:43 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:03.039 23:10:43 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:03.039 23:10:43 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:29:03.039 23:10:43 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:29:03.039 23:10:43 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:03.039 23:10:43 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:29:03.039 23:10:43 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:29:03.039 23:10:43 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:29:03.039 23:10:43 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:29:03.039 23:10:43 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:03.039 23:10:43 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:29:03.039 23:10:43 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:29:03.039 23:10:43 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:03.039 23:10:43 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:03.039 23:10:43 skip_rpc -- scripts/common.sh@368 -- # return 0 00:29:03.039 23:10:43 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:03.039 23:10:43 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:03.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.039 --rc genhtml_branch_coverage=1 00:29:03.039 --rc genhtml_function_coverage=1 00:29:03.039 --rc genhtml_legend=1 00:29:03.039 --rc geninfo_all_blocks=1 00:29:03.039 --rc geninfo_unexecuted_blocks=1 00:29:03.039 00:29:03.039 ' 00:29:03.039 23:10:43 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:03.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.039 --rc genhtml_branch_coverage=1 00:29:03.039 --rc genhtml_function_coverage=1 00:29:03.039 --rc genhtml_legend=1 00:29:03.039 --rc geninfo_all_blocks=1 00:29:03.039 --rc geninfo_unexecuted_blocks=1 00:29:03.039 00:29:03.039 ' 00:29:03.039 23:10:43 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:29:03.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.039 --rc genhtml_branch_coverage=1 00:29:03.039 --rc genhtml_function_coverage=1 00:29:03.039 --rc genhtml_legend=1 00:29:03.039 --rc geninfo_all_blocks=1 00:29:03.039 --rc geninfo_unexecuted_blocks=1 00:29:03.039 00:29:03.039 ' 00:29:03.039 23:10:43 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:03.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.039 --rc genhtml_branch_coverage=1 00:29:03.039 --rc genhtml_function_coverage=1 00:29:03.039 --rc genhtml_legend=1 00:29:03.039 --rc geninfo_all_blocks=1 00:29:03.039 --rc geninfo_unexecuted_blocks=1 00:29:03.039 00:29:03.039 ' 00:29:03.039 23:10:43 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:29:03.039 23:10:43 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:29:03.039 23:10:43 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:29:03.039 23:10:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:03.039 23:10:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:03.039 23:10:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:03.039 ************************************ 00:29:03.039 START TEST skip_rpc 00:29:03.039 ************************************ 00:29:03.039 23:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:29:03.039 23:10:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=56971 00:29:03.039 23:10:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:29:03.039 23:10:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:29:03.039 23:10:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:29:03.039 [2024-12-09 23:10:43.385962] Starting SPDK v25.01-pre 
git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:29:03.039 [2024-12-09 23:10:43.386113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56971 ] 00:29:03.039 [2024-12-09 23:10:43.573756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:03.297 [2024-12-09 23:10:43.703930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.562 23:10:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:29:08.562 23:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:29:08.562 23:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:29:08.562 23:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:08.562 23:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:08.562 23:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:08.562 23:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:08.562 23:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:29:08.562 23:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.562 23:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:08.562 23:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:08.562 23:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:29:08.562 23:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:08.562 23:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:08.562 23:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:29:08.563 23:10:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:29:08.563 23:10:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56971 00:29:08.563 23:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 56971 ']' 00:29:08.563 23:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 56971 00:29:08.563 23:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:29:08.563 23:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:08.563 23:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56971 00:29:08.563 23:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:08.563 23:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:08.563 23:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56971' 00:29:08.563 killing process with pid 56971 00:29:08.563 23:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 56971 00:29:08.563 23:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 56971 00:29:10.466 ************************************ 00:29:10.466 END TEST skip_rpc 00:29:10.466 ************************************ 00:29:10.466 00:29:10.466 real 0m7.566s 00:29:10.466 user 0m7.063s 00:29:10.466 sys 0m0.418s 00:29:10.466 23:10:50 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:10.466 23:10:50 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:10.466 23:10:50 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:29:10.466 23:10:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:10.466 23:10:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:10.466 23:10:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:10.466 
************************************ 00:29:10.466 START TEST skip_rpc_with_json 00:29:10.466 ************************************ 00:29:10.466 23:10:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:29:10.466 23:10:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:29:10.466 23:10:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:29:10.466 23:10:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57075 00:29:10.466 23:10:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:29:10.466 23:10:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57075 00:29:10.466 23:10:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57075 ']' 00:29:10.466 23:10:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:10.466 23:10:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:10.466 23:10:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:10.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:10.466 23:10:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:10.466 23:10:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:29:10.466 [2024-12-09 23:10:51.007602] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:29:10.466 [2024-12-09 23:10:51.007952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57075 ] 00:29:10.725 [2024-12-09 23:10:51.178658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.725 [2024-12-09 23:10:51.303687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:11.700 23:10:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:11.701 23:10:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:29:11.701 23:10:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:29:11.701 23:10:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.701 23:10:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:29:11.701 [2024-12-09 23:10:52.214267] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:29:11.701 request: 00:29:11.701 { 00:29:11.701 "trtype": "tcp", 00:29:11.701 "method": "nvmf_get_transports", 00:29:11.701 "req_id": 1 00:29:11.701 } 00:29:11.701 Got JSON-RPC error response 00:29:11.701 response: 00:29:11.701 { 00:29:11.701 "code": -19, 00:29:11.701 "message": "No such device" 00:29:11.701 } 00:29:11.701 23:10:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:11.701 23:10:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:29:11.701 23:10:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.701 23:10:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:29:11.701 [2024-12-09 23:10:52.226417] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:29:11.701 23:10:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.701 23:10:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:29:11.701 23:10:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.701 23:10:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:29:11.960 23:10:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.960 23:10:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:29:11.960 { 00:29:11.960 "subsystems": [ 00:29:11.960 { 00:29:11.960 "subsystem": "fsdev", 00:29:11.960 "config": [ 00:29:11.960 { 00:29:11.960 "method": "fsdev_set_opts", 00:29:11.960 "params": { 00:29:11.960 "fsdev_io_pool_size": 65535, 00:29:11.960 "fsdev_io_cache_size": 256 00:29:11.960 } 00:29:11.960 } 00:29:11.960 ] 00:29:11.960 }, 00:29:11.960 { 00:29:11.960 "subsystem": "keyring", 00:29:11.960 "config": [] 00:29:11.960 }, 00:29:11.960 { 00:29:11.960 "subsystem": "iobuf", 00:29:11.960 "config": [ 00:29:11.960 { 00:29:11.960 "method": "iobuf_set_options", 00:29:11.960 "params": { 00:29:11.960 "small_pool_count": 8192, 00:29:11.960 "large_pool_count": 1024, 00:29:11.960 "small_bufsize": 8192, 00:29:11.960 "large_bufsize": 135168, 00:29:11.960 "enable_numa": false 00:29:11.960 } 00:29:11.960 } 00:29:11.960 ] 00:29:11.960 }, 00:29:11.960 { 00:29:11.960 "subsystem": "sock", 00:29:11.960 "config": [ 00:29:11.960 { 00:29:11.960 "method": "sock_set_default_impl", 00:29:11.960 "params": { 00:29:11.960 "impl_name": "posix" 00:29:11.960 } 00:29:11.960 }, 00:29:11.960 { 00:29:11.960 "method": "sock_impl_set_options", 00:29:11.960 "params": { 00:29:11.960 "impl_name": "ssl", 00:29:11.960 "recv_buf_size": 4096, 00:29:11.960 "send_buf_size": 4096, 00:29:11.960 "enable_recv_pipe": true, 00:29:11.960 "enable_quickack": false, 00:29:11.960 
"enable_placement_id": 0, 00:29:11.960 "enable_zerocopy_send_server": true, 00:29:11.960 "enable_zerocopy_send_client": false, 00:29:11.960 "zerocopy_threshold": 0, 00:29:11.960 "tls_version": 0, 00:29:11.960 "enable_ktls": false 00:29:11.960 } 00:29:11.960 }, 00:29:11.960 { 00:29:11.960 "method": "sock_impl_set_options", 00:29:11.960 "params": { 00:29:11.960 "impl_name": "posix", 00:29:11.960 "recv_buf_size": 2097152, 00:29:11.960 "send_buf_size": 2097152, 00:29:11.960 "enable_recv_pipe": true, 00:29:11.960 "enable_quickack": false, 00:29:11.960 "enable_placement_id": 0, 00:29:11.960 "enable_zerocopy_send_server": true, 00:29:11.960 "enable_zerocopy_send_client": false, 00:29:11.960 "zerocopy_threshold": 0, 00:29:11.960 "tls_version": 0, 00:29:11.960 "enable_ktls": false 00:29:11.960 } 00:29:11.960 } 00:29:11.960 ] 00:29:11.960 }, 00:29:11.960 { 00:29:11.960 "subsystem": "vmd", 00:29:11.960 "config": [] 00:29:11.960 }, 00:29:11.960 { 00:29:11.960 "subsystem": "accel", 00:29:11.960 "config": [ 00:29:11.960 { 00:29:11.960 "method": "accel_set_options", 00:29:11.960 "params": { 00:29:11.960 "small_cache_size": 128, 00:29:11.960 "large_cache_size": 16, 00:29:11.960 "task_count": 2048, 00:29:11.960 "sequence_count": 2048, 00:29:11.960 "buf_count": 2048 00:29:11.960 } 00:29:11.960 } 00:29:11.960 ] 00:29:11.960 }, 00:29:11.960 { 00:29:11.960 "subsystem": "bdev", 00:29:11.960 "config": [ 00:29:11.960 { 00:29:11.960 "method": "bdev_set_options", 00:29:11.960 "params": { 00:29:11.960 "bdev_io_pool_size": 65535, 00:29:11.960 "bdev_io_cache_size": 256, 00:29:11.960 "bdev_auto_examine": true, 00:29:11.960 "iobuf_small_cache_size": 128, 00:29:11.960 "iobuf_large_cache_size": 16 00:29:11.960 } 00:29:11.960 }, 00:29:11.960 { 00:29:11.960 "method": "bdev_raid_set_options", 00:29:11.960 "params": { 00:29:11.960 "process_window_size_kb": 1024, 00:29:11.960 "process_max_bandwidth_mb_sec": 0 00:29:11.960 } 00:29:11.960 }, 00:29:11.960 { 00:29:11.960 "method": "bdev_iscsi_set_options", 
00:29:11.960 "params": { 00:29:11.960 "timeout_sec": 30 00:29:11.960 } 00:29:11.960 }, 00:29:11.960 { 00:29:11.960 "method": "bdev_nvme_set_options", 00:29:11.960 "params": { 00:29:11.960 "action_on_timeout": "none", 00:29:11.960 "timeout_us": 0, 00:29:11.960 "timeout_admin_us": 0, 00:29:11.960 "keep_alive_timeout_ms": 10000, 00:29:11.960 "arbitration_burst": 0, 00:29:11.960 "low_priority_weight": 0, 00:29:11.960 "medium_priority_weight": 0, 00:29:11.960 "high_priority_weight": 0, 00:29:11.960 "nvme_adminq_poll_period_us": 10000, 00:29:11.960 "nvme_ioq_poll_period_us": 0, 00:29:11.960 "io_queue_requests": 0, 00:29:11.960 "delay_cmd_submit": true, 00:29:11.960 "transport_retry_count": 4, 00:29:11.960 "bdev_retry_count": 3, 00:29:11.960 "transport_ack_timeout": 0, 00:29:11.960 "ctrlr_loss_timeout_sec": 0, 00:29:11.960 "reconnect_delay_sec": 0, 00:29:11.960 "fast_io_fail_timeout_sec": 0, 00:29:11.960 "disable_auto_failback": false, 00:29:11.960 "generate_uuids": false, 00:29:11.960 "transport_tos": 0, 00:29:11.960 "nvme_error_stat": false, 00:29:11.960 "rdma_srq_size": 0, 00:29:11.960 "io_path_stat": false, 00:29:11.960 "allow_accel_sequence": false, 00:29:11.960 "rdma_max_cq_size": 0, 00:29:11.960 "rdma_cm_event_timeout_ms": 0, 00:29:11.960 "dhchap_digests": [ 00:29:11.960 "sha256", 00:29:11.960 "sha384", 00:29:11.960 "sha512" 00:29:11.960 ], 00:29:11.960 "dhchap_dhgroups": [ 00:29:11.960 "null", 00:29:11.960 "ffdhe2048", 00:29:11.960 "ffdhe3072", 00:29:11.960 "ffdhe4096", 00:29:11.960 "ffdhe6144", 00:29:11.960 "ffdhe8192" 00:29:11.960 ] 00:29:11.960 } 00:29:11.960 }, 00:29:11.960 { 00:29:11.960 "method": "bdev_nvme_set_hotplug", 00:29:11.960 "params": { 00:29:11.960 "period_us": 100000, 00:29:11.960 "enable": false 00:29:11.960 } 00:29:11.960 }, 00:29:11.960 { 00:29:11.960 "method": "bdev_wait_for_examine" 00:29:11.960 } 00:29:11.960 ] 00:29:11.960 }, 00:29:11.960 { 00:29:11.960 "subsystem": "scsi", 00:29:11.960 "config": null 00:29:11.960 }, 00:29:11.960 { 
00:29:11.960 "subsystem": "scheduler", 00:29:11.960 "config": [ 00:29:11.960 { 00:29:11.960 "method": "framework_set_scheduler", 00:29:11.960 "params": { 00:29:11.960 "name": "static" 00:29:11.960 } 00:29:11.960 } 00:29:11.960 ] 00:29:11.960 }, 00:29:11.960 { 00:29:11.960 "subsystem": "vhost_scsi", 00:29:11.960 "config": [] 00:29:11.960 }, 00:29:11.960 { 00:29:11.960 "subsystem": "vhost_blk", 00:29:11.960 "config": [] 00:29:11.960 }, 00:29:11.960 { 00:29:11.960 "subsystem": "ublk", 00:29:11.960 "config": [] 00:29:11.960 }, 00:29:11.960 { 00:29:11.960 "subsystem": "nbd", 00:29:11.960 "config": [] 00:29:11.960 }, 00:29:11.960 { 00:29:11.960 "subsystem": "nvmf", 00:29:11.960 "config": [ 00:29:11.960 { 00:29:11.960 "method": "nvmf_set_config", 00:29:11.960 "params": { 00:29:11.960 "discovery_filter": "match_any", 00:29:11.960 "admin_cmd_passthru": { 00:29:11.960 "identify_ctrlr": false 00:29:11.960 }, 00:29:11.960 "dhchap_digests": [ 00:29:11.960 "sha256", 00:29:11.960 "sha384", 00:29:11.960 "sha512" 00:29:11.960 ], 00:29:11.960 "dhchap_dhgroups": [ 00:29:11.960 "null", 00:29:11.960 "ffdhe2048", 00:29:11.960 "ffdhe3072", 00:29:11.960 "ffdhe4096", 00:29:11.960 "ffdhe6144", 00:29:11.960 "ffdhe8192" 00:29:11.960 ] 00:29:11.960 } 00:29:11.960 }, 00:29:11.960 { 00:29:11.960 "method": "nvmf_set_max_subsystems", 00:29:11.960 "params": { 00:29:11.960 "max_subsystems": 1024 00:29:11.960 } 00:29:11.960 }, 00:29:11.960 { 00:29:11.960 "method": "nvmf_set_crdt", 00:29:11.960 "params": { 00:29:11.960 "crdt1": 0, 00:29:11.960 "crdt2": 0, 00:29:11.960 "crdt3": 0 00:29:11.960 } 00:29:11.960 }, 00:29:11.960 { 00:29:11.960 "method": "nvmf_create_transport", 00:29:11.960 "params": { 00:29:11.960 "trtype": "TCP", 00:29:11.960 "max_queue_depth": 128, 00:29:11.960 "max_io_qpairs_per_ctrlr": 127, 00:29:11.960 "in_capsule_data_size": 4096, 00:29:11.960 "max_io_size": 131072, 00:29:11.960 "io_unit_size": 131072, 00:29:11.960 "max_aq_depth": 128, 00:29:11.960 "num_shared_buffers": 511, 
00:29:11.960 "buf_cache_size": 4294967295, 00:29:11.960 "dif_insert_or_strip": false, 00:29:11.960 "zcopy": false, 00:29:11.960 "c2h_success": true, 00:29:11.960 "sock_priority": 0, 00:29:11.960 "abort_timeout_sec": 1, 00:29:11.960 "ack_timeout": 0, 00:29:11.960 "data_wr_pool_size": 0 00:29:11.960 } 00:29:11.960 } 00:29:11.960 ] 00:29:11.960 }, 00:29:11.960 { 00:29:11.960 "subsystem": "iscsi", 00:29:11.960 "config": [ 00:29:11.960 { 00:29:11.960 "method": "iscsi_set_options", 00:29:11.960 "params": { 00:29:11.960 "node_base": "iqn.2016-06.io.spdk", 00:29:11.960 "max_sessions": 128, 00:29:11.960 "max_connections_per_session": 2, 00:29:11.960 "max_queue_depth": 64, 00:29:11.960 "default_time2wait": 2, 00:29:11.960 "default_time2retain": 20, 00:29:11.960 "first_burst_length": 8192, 00:29:11.960 "immediate_data": true, 00:29:11.960 "allow_duplicated_isid": false, 00:29:11.960 "error_recovery_level": 0, 00:29:11.960 "nop_timeout": 60, 00:29:11.960 "nop_in_interval": 30, 00:29:11.960 "disable_chap": false, 00:29:11.960 "require_chap": false, 00:29:11.961 "mutual_chap": false, 00:29:11.961 "chap_group": 0, 00:29:11.961 "max_large_datain_per_connection": 64, 00:29:11.961 "max_r2t_per_connection": 4, 00:29:11.961 "pdu_pool_size": 36864, 00:29:11.961 "immediate_data_pool_size": 16384, 00:29:11.961 "data_out_pool_size": 2048 00:29:11.961 } 00:29:11.961 } 00:29:11.961 ] 00:29:11.961 } 00:29:11.961 ] 00:29:11.961 } 00:29:11.961 23:10:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:29:11.961 23:10:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57075 00:29:11.961 23:10:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57075 ']' 00:29:11.961 23:10:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57075 00:29:11.961 23:10:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:29:11.961 23:10:52 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:11.961 23:10:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57075 00:29:11.961 23:10:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:11.961 23:10:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:11.961 killing process with pid 57075 00:29:11.961 23:10:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57075' 00:29:11.961 23:10:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57075 00:29:11.961 23:10:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57075 00:29:14.490 23:10:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57131 00:29:14.490 23:10:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:29:14.490 23:10:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:29:19.761 23:11:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57131 00:29:19.761 23:11:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57131 ']' 00:29:19.761 23:11:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57131 00:29:19.761 23:11:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:29:19.761 23:11:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:19.761 23:11:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57131 00:29:19.761 killing process with pid 57131 00:29:19.761 23:11:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:19.762 23:11:00 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:19.762 23:11:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57131' 00:29:19.762 23:11:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57131 00:29:19.762 23:11:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57131 00:29:22.301 23:11:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:29:22.301 23:11:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:29:22.301 00:29:22.301 real 0m11.753s 00:29:22.301 user 0m11.167s 00:29:22.301 sys 0m0.925s 00:29:22.301 ************************************ 00:29:22.301 END TEST skip_rpc_with_json 00:29:22.301 ************************************ 00:29:22.301 23:11:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:22.301 23:11:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:29:22.301 23:11:02 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:29:22.301 23:11:02 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:22.301 23:11:02 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:22.301 23:11:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:22.301 ************************************ 00:29:22.301 START TEST skip_rpc_with_delay 00:29:22.301 ************************************ 00:29:22.301 23:11:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:29:22.301 23:11:02 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:29:22.301 23:11:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:29:22.301 
23:11:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:29:22.301 23:11:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:22.301 23:11:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:22.301 23:11:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:22.301 23:11:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:22.301 23:11:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:22.301 23:11:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:22.301 23:11:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:22.301 23:11:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:29:22.301 23:11:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:29:22.301 [2024-12-09 23:11:02.856187] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:29:22.561 ************************************ 00:29:22.561 END TEST skip_rpc_with_delay 00:29:22.561 ************************************ 00:29:22.561 23:11:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:29:22.561 23:11:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:22.561 23:11:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:22.561 23:11:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:22.561 00:29:22.561 real 0m0.205s 00:29:22.561 user 0m0.103s 00:29:22.561 sys 0m0.100s 00:29:22.561 23:11:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:22.561 23:11:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:29:22.561 23:11:02 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:29:22.561 23:11:02 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:29:22.561 23:11:02 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:29:22.561 23:11:02 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:22.561 23:11:02 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:22.561 23:11:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:22.561 ************************************ 00:29:22.561 START TEST exit_on_failed_rpc_init 00:29:22.561 ************************************ 00:29:22.561 23:11:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:29:22.561 23:11:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57270 00:29:22.561 23:11:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57270 00:29:22.561 23:11:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:29:22.561 23:11:03 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57270 ']' 00:29:22.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:22.561 23:11:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:22.561 23:11:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:22.561 23:11:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:22.561 23:11:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:22.561 23:11:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:29:22.561 [2024-12-09 23:11:03.126410] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:29:22.561 [2024-12-09 23:11:03.126756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57270 ] 00:29:22.820 [2024-12-09 23:11:03.312026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.820 [2024-12-09 23:11:03.437098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:23.759 23:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:23.759 23:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:29:23.759 23:11:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:29:23.759 23:11:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:29:23.759 23:11:04 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:29:23.759 23:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:29:23.759 23:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:23.759 23:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:23.759 23:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:23.759 23:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:23.759 23:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:23.759 23:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:23.759 23:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:23.759 23:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:29:23.759 23:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:29:24.020 [2024-12-09 23:11:04.489412] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:29:24.020 [2024-12-09 23:11:04.489826] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57288 ] 00:29:24.278 [2024-12-09 23:11:04.673932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.278 [2024-12-09 23:11:04.800582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.279 [2024-12-09 23:11:04.800922] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:29:24.279 [2024-12-09 23:11:04.800951] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:29:24.279 [2024-12-09 23:11:04.800970] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:24.536 23:11:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:29:24.536 23:11:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:24.536 23:11:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:29:24.536 23:11:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:29:24.536 23:11:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:29:24.536 23:11:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:24.536 23:11:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:24.536 23:11:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57270 00:29:24.536 23:11:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57270 ']' 00:29:24.536 23:11:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57270 00:29:24.536 23:11:05 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:29:24.536 23:11:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:24.536 23:11:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57270 00:29:24.536 23:11:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:24.536 23:11:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:24.536 23:11:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57270' 00:29:24.536 killing process with pid 57270 00:29:24.536 23:11:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57270 00:29:24.536 23:11:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57270 00:29:27.132 ************************************ 00:29:27.132 END TEST exit_on_failed_rpc_init 00:29:27.132 ************************************ 00:29:27.132 00:29:27.132 real 0m4.591s 00:29:27.132 user 0m4.922s 00:29:27.132 sys 0m0.652s 00:29:27.132 23:11:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:27.132 23:11:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:29:27.132 23:11:07 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:29:27.132 00:29:27.132 real 0m24.647s 00:29:27.132 user 0m23.488s 00:29:27.132 sys 0m2.406s 00:29:27.132 23:11:07 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:27.132 ************************************ 00:29:27.132 END TEST skip_rpc 00:29:27.132 ************************************ 00:29:27.132 23:11:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:27.132 23:11:07 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:29:27.132 23:11:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:27.132 23:11:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:27.132 23:11:07 -- common/autotest_common.sh@10 -- # set +x 00:29:27.132 ************************************ 00:29:27.132 START TEST rpc_client 00:29:27.132 ************************************ 00:29:27.132 23:11:07 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:29:27.391 * Looking for test storage... 00:29:27.391 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:29:27.391 23:11:07 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:27.391 23:11:07 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:29:27.391 23:11:07 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:27.391 23:11:07 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:27.391 23:11:07 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:27.391 23:11:07 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:27.391 23:11:07 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:27.391 23:11:07 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:29:27.391 23:11:07 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:29:27.391 23:11:07 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:29:27.391 23:11:07 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:29:27.391 23:11:07 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:29:27.391 23:11:07 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:29:27.391 23:11:07 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:29:27.391 23:11:07 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:27.391 23:11:07 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:29:27.391 23:11:07 rpc_client -- scripts/common.sh@345 
-- # : 1 00:29:27.391 23:11:07 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:27.391 23:11:07 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:27.391 23:11:07 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:29:27.391 23:11:07 rpc_client -- scripts/common.sh@353 -- # local d=1 00:29:27.391 23:11:07 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:27.391 23:11:07 rpc_client -- scripts/common.sh@355 -- # echo 1 00:29:27.391 23:11:07 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:29:27.391 23:11:07 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:29:27.391 23:11:07 rpc_client -- scripts/common.sh@353 -- # local d=2 00:29:27.391 23:11:07 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:27.391 23:11:07 rpc_client -- scripts/common.sh@355 -- # echo 2 00:29:27.391 23:11:07 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:29:27.391 23:11:07 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:27.391 23:11:07 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:27.391 23:11:07 rpc_client -- scripts/common.sh@368 -- # return 0 00:29:27.391 23:11:07 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:27.391 23:11:07 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:27.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.391 --rc genhtml_branch_coverage=1 00:29:27.391 --rc genhtml_function_coverage=1 00:29:27.391 --rc genhtml_legend=1 00:29:27.391 --rc geninfo_all_blocks=1 00:29:27.391 --rc geninfo_unexecuted_blocks=1 00:29:27.391 00:29:27.391 ' 00:29:27.391 23:11:07 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:27.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.391 --rc genhtml_branch_coverage=1 00:29:27.391 --rc genhtml_function_coverage=1 00:29:27.391 --rc 
genhtml_legend=1 00:29:27.391 --rc geninfo_all_blocks=1 00:29:27.391 --rc geninfo_unexecuted_blocks=1 00:29:27.392 00:29:27.392 ' 00:29:27.392 23:11:07 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:27.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.392 --rc genhtml_branch_coverage=1 00:29:27.392 --rc genhtml_function_coverage=1 00:29:27.392 --rc genhtml_legend=1 00:29:27.392 --rc geninfo_all_blocks=1 00:29:27.392 --rc geninfo_unexecuted_blocks=1 00:29:27.392 00:29:27.392 ' 00:29:27.392 23:11:07 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:27.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.392 --rc genhtml_branch_coverage=1 00:29:27.392 --rc genhtml_function_coverage=1 00:29:27.392 --rc genhtml_legend=1 00:29:27.392 --rc geninfo_all_blocks=1 00:29:27.392 --rc geninfo_unexecuted_blocks=1 00:29:27.392 00:29:27.392 ' 00:29:27.392 23:11:07 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:29:27.392 OK 00:29:27.392 23:11:08 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:29:27.392 00:29:27.392 real 0m0.298s 00:29:27.392 user 0m0.168s 00:29:27.392 sys 0m0.147s 00:29:27.392 23:11:08 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:27.392 23:11:08 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:29:27.392 ************************************ 00:29:27.392 END TEST rpc_client 00:29:27.392 ************************************ 00:29:27.650 23:11:08 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:29:27.650 23:11:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:27.650 23:11:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:27.650 23:11:08 -- common/autotest_common.sh@10 -- # set +x 00:29:27.650 ************************************ 00:29:27.650 START TEST json_config 
00:29:27.650 ************************************ 00:29:27.650 23:11:08 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:29:27.650 23:11:08 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:27.650 23:11:08 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:29:27.650 23:11:08 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:27.650 23:11:08 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:27.650 23:11:08 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:27.650 23:11:08 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:27.650 23:11:08 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:27.650 23:11:08 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:29:27.650 23:11:08 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:29:27.650 23:11:08 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:29:27.650 23:11:08 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:29:27.650 23:11:08 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:29:27.650 23:11:08 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:29:27.650 23:11:08 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:29:27.650 23:11:08 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:27.650 23:11:08 json_config -- scripts/common.sh@344 -- # case "$op" in 00:29:27.650 23:11:08 json_config -- scripts/common.sh@345 -- # : 1 00:29:27.650 23:11:08 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:27.650 23:11:08 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:27.650 23:11:08 json_config -- scripts/common.sh@365 -- # decimal 1 00:29:27.650 23:11:08 json_config -- scripts/common.sh@353 -- # local d=1 00:29:27.650 23:11:08 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:27.650 23:11:08 json_config -- scripts/common.sh@355 -- # echo 1 00:29:27.650 23:11:08 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:29:27.650 23:11:08 json_config -- scripts/common.sh@366 -- # decimal 2 00:29:27.650 23:11:08 json_config -- scripts/common.sh@353 -- # local d=2 00:29:27.650 23:11:08 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:27.650 23:11:08 json_config -- scripts/common.sh@355 -- # echo 2 00:29:27.650 23:11:08 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:29:27.650 23:11:08 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:27.650 23:11:08 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:27.650 23:11:08 json_config -- scripts/common.sh@368 -- # return 0 00:29:27.650 23:11:08 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:27.650 23:11:08 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:27.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.650 --rc genhtml_branch_coverage=1 00:29:27.650 --rc genhtml_function_coverage=1 00:29:27.650 --rc genhtml_legend=1 00:29:27.650 --rc geninfo_all_blocks=1 00:29:27.650 --rc geninfo_unexecuted_blocks=1 00:29:27.650 00:29:27.650 ' 00:29:27.650 23:11:08 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:27.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.650 --rc genhtml_branch_coverage=1 00:29:27.650 --rc genhtml_function_coverage=1 00:29:27.650 --rc genhtml_legend=1 00:29:27.650 --rc geninfo_all_blocks=1 00:29:27.650 --rc geninfo_unexecuted_blocks=1 00:29:27.650 00:29:27.650 ' 00:29:27.650 23:11:08 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:27.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.650 --rc genhtml_branch_coverage=1 00:29:27.651 --rc genhtml_function_coverage=1 00:29:27.651 --rc genhtml_legend=1 00:29:27.651 --rc geninfo_all_blocks=1 00:29:27.651 --rc geninfo_unexecuted_blocks=1 00:29:27.651 00:29:27.651 ' 00:29:27.651 23:11:08 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:27.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.651 --rc genhtml_branch_coverage=1 00:29:27.651 --rc genhtml_function_coverage=1 00:29:27.651 --rc genhtml_legend=1 00:29:27.651 --rc geninfo_all_blocks=1 00:29:27.651 --rc geninfo_unexecuted_blocks=1 00:29:27.651 00:29:27.651 ' 00:29:27.651 23:11:08 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:27.651 23:11:08 json_config -- nvmf/common.sh@7 -- # uname -s 00:29:27.651 23:11:08 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:27.651 23:11:08 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:27.651 23:11:08 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:27.651 23:11:08 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:27.651 23:11:08 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:27.651 23:11:08 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:27.651 23:11:08 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:27.651 23:11:08 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:27.651 23:11:08 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:27.651 23:11:08 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:27.909 23:11:08 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:315856d4-f6fe-49fc-aa4d-9adba20f06a2 00:29:27.909 23:11:08 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=315856d4-f6fe-49fc-aa4d-9adba20f06a2 00:29:27.909 23:11:08 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:27.909 23:11:08 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:27.909 23:11:08 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:29:27.909 23:11:08 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:27.909 23:11:08 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:27.909 23:11:08 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:29:27.909 23:11:08 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:27.909 23:11:08 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:27.909 23:11:08 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:27.909 23:11:08 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.910 23:11:08 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.910 23:11:08 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.910 23:11:08 json_config -- paths/export.sh@5 -- # export PATH 00:29:27.910 23:11:08 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.910 23:11:08 json_config -- nvmf/common.sh@51 -- # : 0 00:29:27.910 23:11:08 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:27.910 23:11:08 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:27.910 23:11:08 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:27.910 23:11:08 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:27.910 23:11:08 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:27.910 23:11:08 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:27.910 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:27.910 23:11:08 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:27.910 23:11:08 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:27.910 23:11:08 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:27.910 23:11:08 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:29:27.910 23:11:08 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:29:27.910 23:11:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:29:27.910 23:11:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:29:27.910 23:11:08 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:29:27.910 23:11:08 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:29:27.910 WARNING: No tests are enabled so not running JSON configuration tests 00:29:27.910 23:11:08 json_config -- json_config/json_config.sh@28 -- # exit 0 00:29:27.910 00:29:27.910 real 0m0.227s 00:29:27.910 user 0m0.142s 00:29:27.910 sys 0m0.087s 00:29:27.910 23:11:08 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:27.910 23:11:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:29:27.910 ************************************ 00:29:27.910 END TEST json_config 00:29:27.910 ************************************ 00:29:27.910 23:11:08 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:29:27.910 23:11:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:27.910 23:11:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:27.910 23:11:08 -- common/autotest_common.sh@10 -- # set +x 00:29:27.910 ************************************ 00:29:27.910 START TEST json_config_extra_key 00:29:27.910 ************************************ 00:29:27.910 23:11:08 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:29:27.910 23:11:08 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:27.910 23:11:08 json_config_extra_key -- 
common/autotest_common.sh@1711 -- # lcov --version 00:29:27.910 23:11:08 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:28.169 23:11:08 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:28.169 23:11:08 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:28.169 23:11:08 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:28.169 23:11:08 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:28.169 23:11:08 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:29:28.169 23:11:08 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:29:28.169 23:11:08 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:29:28.169 23:11:08 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:29:28.169 23:11:08 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:29:28.169 23:11:08 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:29:28.169 23:11:08 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:29:28.169 23:11:08 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:28.169 23:11:08 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:29:28.169 23:11:08 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:29:28.169 23:11:08 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:28.169 23:11:08 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:28.169 23:11:08 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:29:28.169 23:11:08 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:29:28.169 23:11:08 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:28.169 23:11:08 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:29:28.169 23:11:08 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:29:28.169 23:11:08 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:29:28.169 23:11:08 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:29:28.169 23:11:08 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:28.169 23:11:08 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:29:28.169 23:11:08 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:29:28.169 23:11:08 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:28.169 23:11:08 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:28.169 23:11:08 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:29:28.169 23:11:08 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:28.169 23:11:08 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:28.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.169 --rc genhtml_branch_coverage=1 00:29:28.169 --rc genhtml_function_coverage=1 00:29:28.169 --rc genhtml_legend=1 00:29:28.169 --rc geninfo_all_blocks=1 00:29:28.169 --rc geninfo_unexecuted_blocks=1 00:29:28.169 00:29:28.169 ' 00:29:28.169 23:11:08 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:28.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.169 --rc genhtml_branch_coverage=1 00:29:28.169 --rc genhtml_function_coverage=1 00:29:28.169 --rc 
genhtml_legend=1 00:29:28.169 --rc geninfo_all_blocks=1 00:29:28.169 --rc geninfo_unexecuted_blocks=1 00:29:28.169 00:29:28.169 ' 00:29:28.169 23:11:08 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:28.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.169 --rc genhtml_branch_coverage=1 00:29:28.169 --rc genhtml_function_coverage=1 00:29:28.169 --rc genhtml_legend=1 00:29:28.169 --rc geninfo_all_blocks=1 00:29:28.169 --rc geninfo_unexecuted_blocks=1 00:29:28.169 00:29:28.169 ' 00:29:28.169 23:11:08 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:28.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.169 --rc genhtml_branch_coverage=1 00:29:28.169 --rc genhtml_function_coverage=1 00:29:28.169 --rc genhtml_legend=1 00:29:28.169 --rc geninfo_all_blocks=1 00:29:28.169 --rc geninfo_unexecuted_blocks=1 00:29:28.169 00:29:28.169 ' 00:29:28.169 23:11:08 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:28.169 23:11:08 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:29:28.169 23:11:08 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:28.169 23:11:08 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:28.169 23:11:08 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:28.169 23:11:08 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:28.169 23:11:08 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:28.169 23:11:08 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:28.169 23:11:08 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:28.169 23:11:08 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:28.169 23:11:08 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:28.169 23:11:08 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:28.169 23:11:08 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:315856d4-f6fe-49fc-aa4d-9adba20f06a2 00:29:28.169 23:11:08 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=315856d4-f6fe-49fc-aa4d-9adba20f06a2 00:29:28.169 23:11:08 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:28.169 23:11:08 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:28.169 23:11:08 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:29:28.169 23:11:08 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:28.169 23:11:08 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:28.169 23:11:08 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:29:28.169 23:11:08 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:28.169 23:11:08 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:28.169 23:11:08 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:28.169 23:11:08 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.169 23:11:08 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.169 23:11:08 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.169 23:11:08 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:29:28.169 23:11:08 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.169 23:11:08 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:29:28.169 23:11:08 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:28.169 23:11:08 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:28.170 23:11:08 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:28.170 23:11:08 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:28.170 23:11:08 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:29:28.170 23:11:08 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:28.170 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:28.170 23:11:08 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:28.170 23:11:08 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:28.170 23:11:08 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:28.170 23:11:08 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:29:28.170 23:11:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:29:28.170 23:11:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:29:28.170 23:11:08 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:29:28.170 23:11:08 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:29:28.170 23:11:08 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:29:28.170 23:11:08 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:29:28.170 23:11:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:29:28.170 23:11:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:29:28.170 23:11:08 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:29:28.170 23:11:08 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:29:28.170 INFO: launching applications... 
00:29:28.170 23:11:08 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:29:28.170 23:11:08 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:29:28.170 23:11:08 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:29:28.170 23:11:08 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:29:28.170 23:11:08 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:29:28.170 23:11:08 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:29:28.170 23:11:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:29:28.170 23:11:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:29:28.170 23:11:08 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57498 00:29:28.170 23:11:08 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:29:28.170 Waiting for target to run... 00:29:28.170 23:11:08 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57498 /var/tmp/spdk_tgt.sock 00:29:28.170 23:11:08 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57498 ']' 00:29:28.170 23:11:08 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:29:28.170 23:11:08 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:29:28.170 23:11:08 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:28.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:29:28.170 23:11:08 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:29:28.170 23:11:08 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:28.170 23:11:08 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:29:28.170 [2024-12-09 23:11:08.731320] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:29:28.170 [2024-12-09 23:11:08.731464] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57498 ] 00:29:28.737 [2024-12-09 23:11:09.128682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:28.737 [2024-12-09 23:11:09.233126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:29.687 00:29:29.687 INFO: shutting down applications... 00:29:29.687 23:11:09 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:29.687 23:11:09 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:29:29.687 23:11:09 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:29:29.687 23:11:09 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
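The launch sequence traced above starts `spdk_tgt` with `--json extra_key.json`, then `waitforlisten` polls (with `max_retries=100`) until the target is accepting connections on `/var/tmp/spdk_tgt.sock`. A minimal standalone sketch of that readiness-poll pattern, under stated assumptions: a plain file stands in for the UNIX socket, and the stand-in "target" and the 0.1 s poll interval are illustrative, not taken from the harness.

```shell
#!/usr/bin/env bash
# waitforlisten-style readiness poll (hedged sketch, not the SPDK implementation).
# A background job that creates a file after a delay stands in for a target
# process opening its RPC socket.
sock=$(mktemp -u)                     # hypothetical socket path (assumption)
( sleep 0.3; : > "$sock" ) &          # stand-in target: "listens" after startup

# Poll up to 100 times, mirroring max_retries=100 in the trace above.
for (( i = 0; i < 100; i++ )); do
    [[ -e "$sock" ]] && break
    sleep 0.1
done

if [[ -e "$sock" ]]; then
    echo "target is up"
else
    echo "timed out waiting for target"
fi
rm -f "$sock"
```

The real harness additionally checks that the PID is still alive on each retry, so a crashed target fails fast instead of burning the whole retry budget.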
00:29:29.687 23:11:09 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:29:29.687 23:11:09 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:29:29.687 23:11:09 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:29:29.687 23:11:09 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57498 ]] 00:29:29.687 23:11:09 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57498 00:29:29.687 23:11:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:29:29.687 23:11:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:29:29.687 23:11:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57498 00:29:29.687 23:11:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:29:29.945 23:11:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:29:29.945 23:11:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:29:29.945 23:11:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57498 00:29:29.945 23:11:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:29:30.511 23:11:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:29:30.511 23:11:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:29:30.511 23:11:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57498 00:29:30.511 23:11:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:29:31.076 23:11:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:29:31.076 23:11:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:29:31.076 23:11:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57498 00:29:31.076 23:11:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:29:31.643 23:11:12 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:29:31.643 23:11:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:29:31.643 23:11:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57498 00:29:31.643 23:11:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:29:31.901 23:11:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:29:31.901 23:11:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:29:31.901 23:11:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57498 00:29:31.901 23:11:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:29:32.469 23:11:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:29:32.469 23:11:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:29:32.469 23:11:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57498 00:29:32.469 23:11:13 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:29:32.469 23:11:13 json_config_extra_key -- json_config/common.sh@43 -- # break 00:29:32.469 23:11:13 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:29:32.469 SPDK target shutdown done 00:29:32.469 23:11:13 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:29:32.469 Success 00:29:32.469 23:11:13 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:29:32.469 ************************************ 00:29:32.469 END TEST json_config_extra_key 00:29:32.469 ************************************ 00:29:32.469 00:29:32.469 real 0m4.654s 00:29:32.469 user 0m4.211s 00:29:32.469 sys 0m0.616s 00:29:32.469 23:11:13 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:32.469 23:11:13 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:29:32.469 23:11:13 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:29:32.469 23:11:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:32.469 23:11:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:32.469 23:11:13 -- common/autotest_common.sh@10 -- # set +x 00:29:32.728 ************************************ 00:29:32.728 START TEST alias_rpc 00:29:32.728 ************************************ 00:29:32.728 23:11:13 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:29:32.728 * Looking for test storage... 00:29:32.728 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:29:32.728 23:11:13 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:32.728 23:11:13 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:29:32.728 23:11:13 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:32.728 23:11:13 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:32.728 23:11:13 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:32.728 23:11:13 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:32.728 23:11:13 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:32.728 23:11:13 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:29:32.728 23:11:13 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:29:32.728 23:11:13 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:29:32.728 23:11:13 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:29:32.728 23:11:13 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:29:32.728 23:11:13 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:29:32.728 23:11:13 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:29:32.728 23:11:13 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:32.728 23:11:13 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:29:32.728 23:11:13 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:29:32.728 23:11:13 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:32.728 23:11:13 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:32.728 23:11:13 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:29:32.728 23:11:13 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:29:32.728 23:11:13 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:32.728 23:11:13 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:29:32.728 23:11:13 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:29:32.728 23:11:13 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:29:32.728 23:11:13 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:29:32.728 23:11:13 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:32.728 23:11:13 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:29:32.728 23:11:13 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:29:32.728 23:11:13 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:32.728 23:11:13 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:32.728 23:11:13 alias_rpc -- scripts/common.sh@368 -- # return 0 00:29:32.728 23:11:13 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:32.728 23:11:13 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:32.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.728 --rc genhtml_branch_coverage=1 00:29:32.728 --rc genhtml_function_coverage=1 00:29:32.728 --rc genhtml_legend=1 00:29:32.728 --rc geninfo_all_blocks=1 00:29:32.728 --rc geninfo_unexecuted_blocks=1 00:29:32.728 00:29:32.728 ' 00:29:32.728 23:11:13 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:32.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.728 --rc genhtml_branch_coverage=1 00:29:32.728 --rc genhtml_function_coverage=1 00:29:32.728 --rc 
genhtml_legend=1 00:29:32.728 --rc geninfo_all_blocks=1 00:29:32.728 --rc geninfo_unexecuted_blocks=1 00:29:32.728 00:29:32.728 ' 00:29:32.728 23:11:13 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:32.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.728 --rc genhtml_branch_coverage=1 00:29:32.728 --rc genhtml_function_coverage=1 00:29:32.728 --rc genhtml_legend=1 00:29:32.728 --rc geninfo_all_blocks=1 00:29:32.728 --rc geninfo_unexecuted_blocks=1 00:29:32.728 00:29:32.728 ' 00:29:32.728 23:11:13 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:32.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.728 --rc genhtml_branch_coverage=1 00:29:32.728 --rc genhtml_function_coverage=1 00:29:32.728 --rc genhtml_legend=1 00:29:32.728 --rc geninfo_all_blocks=1 00:29:32.728 --rc geninfo_unexecuted_blocks=1 00:29:32.728 00:29:32.728 ' 00:29:32.728 23:11:13 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:29:32.728 23:11:13 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57615 00:29:32.728 23:11:13 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:32.728 23:11:13 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57615 00:29:32.728 23:11:13 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57615 ']' 00:29:32.728 23:11:13 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:32.728 23:11:13 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:32.728 23:11:13 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:32.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
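The `lt 1.15 2` / `cmp_versions` trace that repeats before each test above (from `scripts/common.sh`, deciding whether the installed lcov predates version 2) reduces to splitting both version strings on `.`, `-`, or `:` and comparing components left to right. A self-contained sketch of that comparison; the function name `lt` matches the trace, while treating a missing component as 0 is an assumption made for brevity (the real script also validates each component via a `decimal` helper).

```shell
#!/usr/bin/env bash
# Sketch of the lt/cmp_versions logic traced above: split versions on '.', '-',
# ':' (the IFS=.-: seen in the trace) and compare numeric components in order.
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1  # strictly greater
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0  # strictly smaller
    done
    return 1  # all components equal: not less-than
}

lt 1.15 2 && echo "1.15 < 2"
lt 2.1 2.0 || echo "2.1 >= 2.0"
```

Component-wise integer comparison is why `1.15 < 2` holds here even though `1.15 > 2` as a decimal number: the first components 1 and 2 decide the result before `15` is ever consulted.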
00:29:32.728 23:11:13 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:32.728 23:11:13 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:32.986 [2024-12-09 23:11:13.446607] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:29:32.986 [2024-12-09 23:11:13.446936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57615 ] 00:29:33.327 [2024-12-09 23:11:13.631232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.327 [2024-12-09 23:11:13.748882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.264 23:11:14 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:34.264 23:11:14 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:29:34.264 23:11:14 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:29:34.522 23:11:14 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57615 00:29:34.522 23:11:14 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57615 ']' 00:29:34.522 23:11:14 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57615 00:29:34.522 23:11:14 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:29:34.522 23:11:14 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:34.522 23:11:14 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57615 00:29:34.522 killing process with pid 57615 00:29:34.522 23:11:14 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:34.522 23:11:14 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:34.522 23:11:14 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57615' 00:29:34.522 23:11:14 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 57615 00:29:34.522 23:11:14 alias_rpc -- common/autotest_common.sh@978 -- # wait 57615 00:29:37.056 ************************************ 00:29:37.056 END TEST alias_rpc 00:29:37.056 ************************************ 00:29:37.056 00:29:37.056 real 0m4.394s 00:29:37.056 user 0m4.388s 00:29:37.056 sys 0m0.640s 00:29:37.056 23:11:17 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:37.056 23:11:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:37.056 23:11:17 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:29:37.056 23:11:17 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:29:37.056 23:11:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:37.056 23:11:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:37.056 23:11:17 -- common/autotest_common.sh@10 -- # set +x 00:29:37.056 ************************************ 00:29:37.056 START TEST spdkcli_tcp 00:29:37.056 ************************************ 00:29:37.056 23:11:17 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:29:37.056 * Looking for test storage... 
00:29:37.056 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:29:37.056 23:11:17 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:37.056 23:11:17 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:29:37.056 23:11:17 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:37.320 23:11:17 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:37.320 23:11:17 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:37.320 23:11:17 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:37.320 23:11:17 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:37.320 23:11:17 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:29:37.320 23:11:17 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:29:37.320 23:11:17 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:29:37.320 23:11:17 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:29:37.320 23:11:17 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:29:37.320 23:11:17 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:29:37.320 23:11:17 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:29:37.320 23:11:17 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:37.320 23:11:17 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:29:37.320 23:11:17 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:29:37.320 23:11:17 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:37.320 23:11:17 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:37.320 23:11:17 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:29:37.320 23:11:17 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:29:37.320 23:11:17 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:37.320 23:11:17 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:29:37.320 23:11:17 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:29:37.320 23:11:17 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:29:37.320 23:11:17 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:29:37.320 23:11:17 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:37.320 23:11:17 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:29:37.320 23:11:17 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:29:37.320 23:11:17 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:37.320 23:11:17 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:37.320 23:11:17 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:29:37.320 23:11:17 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:37.320 23:11:17 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:37.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.320 --rc genhtml_branch_coverage=1 00:29:37.320 --rc genhtml_function_coverage=1 00:29:37.320 --rc genhtml_legend=1 00:29:37.320 --rc geninfo_all_blocks=1 00:29:37.320 --rc geninfo_unexecuted_blocks=1 00:29:37.320 00:29:37.320 ' 00:29:37.320 23:11:17 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:37.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.320 --rc genhtml_branch_coverage=1 00:29:37.320 --rc genhtml_function_coverage=1 00:29:37.320 --rc genhtml_legend=1 00:29:37.320 --rc geninfo_all_blocks=1 00:29:37.320 --rc geninfo_unexecuted_blocks=1 00:29:37.320 00:29:37.320 ' 00:29:37.320 23:11:17 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:37.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.320 --rc genhtml_branch_coverage=1 00:29:37.320 --rc genhtml_function_coverage=1 00:29:37.320 --rc genhtml_legend=1 00:29:37.320 --rc geninfo_all_blocks=1 00:29:37.320 --rc geninfo_unexecuted_blocks=1 00:29:37.320 00:29:37.320 ' 00:29:37.320 23:11:17 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:37.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.320 --rc genhtml_branch_coverage=1 00:29:37.320 --rc genhtml_function_coverage=1 00:29:37.320 --rc genhtml_legend=1 00:29:37.320 --rc geninfo_all_blocks=1 00:29:37.320 --rc geninfo_unexecuted_blocks=1 00:29:37.320 00:29:37.320 ' 00:29:37.320 23:11:17 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:29:37.320 23:11:17 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:29:37.320 23:11:17 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:29:37.320 23:11:17 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:29:37.320 23:11:17 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:29:37.320 23:11:17 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:37.320 23:11:17 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:29:37.320 23:11:17 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:37.320 23:11:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:37.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
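The lcov version gate that runs above (`lt 1.15 2` via `cmp_versions` in scripts/common.sh) splits both version strings into components and compares them numerically, left to right, before choosing the `LCOV_OPTS` fallback. A condensed, standalone sketch of that comparison (simplified: the real helper splits on `.-:` and also handles the `>`, `<=`, and `>=` operators):

```shell
# Condensed sketch of the cmp_versions "<" path from scripts/common.sh.
# Splits both versions on '.' and compares numerically, left to right;
# missing components are treated as 0.
lt() {
  local IFS=.
  local -a ver1=($1) ver2=($2)
  local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { echo no; return 1; }
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { echo yes; return 0; }
  done
  echo no; return 1   # equal versions: not strictly less
}

lt 1.15 2    # prints "yes": lcov 1.15 predates 2, so the coverage fallback options apply
```

This is why the log above ends the check by exporting `LCOV_OPTS` with the `--rc lcov_branch_coverage=1 ...` flags for the older lcov.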
00:29:37.320 23:11:17 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57722 00:29:37.320 23:11:17 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57722 00:29:37.320 23:11:17 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57722 ']' 00:29:37.320 23:11:17 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:37.320 23:11:17 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:37.320 23:11:17 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:37.321 23:11:17 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:29:37.321 23:11:17 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:37.321 23:11:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:37.321 [2024-12-09 23:11:17.883085] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:29:37.321 [2024-12-09 23:11:17.883217] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57722 ] 00:29:37.578 [2024-12-09 23:11:18.067388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:37.578 [2024-12-09 23:11:18.191593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:37.578 [2024-12-09 23:11:18.191642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:38.515 23:11:19 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:38.515 23:11:19 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:29:38.515 23:11:19 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57745 00:29:38.515 23:11:19 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:29:38.515 23:11:19 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:29:38.774 [ 00:29:38.774 "bdev_malloc_delete", 00:29:38.774 "bdev_malloc_create", 00:29:38.774 "bdev_null_resize", 00:29:38.774 "bdev_null_delete", 00:29:38.774 "bdev_null_create", 00:29:38.774 "bdev_nvme_cuse_unregister", 00:29:38.774 "bdev_nvme_cuse_register", 00:29:38.774 "bdev_opal_new_user", 00:29:38.774 "bdev_opal_set_lock_state", 00:29:38.774 "bdev_opal_delete", 00:29:38.774 "bdev_opal_get_info", 00:29:38.774 "bdev_opal_create", 00:29:38.774 "bdev_nvme_opal_revert", 00:29:38.774 "bdev_nvme_opal_init", 00:29:38.774 "bdev_nvme_send_cmd", 00:29:38.774 "bdev_nvme_set_keys", 00:29:38.774 "bdev_nvme_get_path_iostat", 00:29:38.774 "bdev_nvme_get_mdns_discovery_info", 00:29:38.774 "bdev_nvme_stop_mdns_discovery", 00:29:38.774 "bdev_nvme_start_mdns_discovery", 00:29:38.774 "bdev_nvme_set_multipath_policy", 00:29:38.775 
"bdev_nvme_set_preferred_path", 00:29:38.775 "bdev_nvme_get_io_paths", 00:29:38.775 "bdev_nvme_remove_error_injection", 00:29:38.775 "bdev_nvme_add_error_injection", 00:29:38.775 "bdev_nvme_get_discovery_info", 00:29:38.775 "bdev_nvme_stop_discovery", 00:29:38.775 "bdev_nvme_start_discovery", 00:29:38.775 "bdev_nvme_get_controller_health_info", 00:29:38.775 "bdev_nvme_disable_controller", 00:29:38.775 "bdev_nvme_enable_controller", 00:29:38.775 "bdev_nvme_reset_controller", 00:29:38.775 "bdev_nvme_get_transport_statistics", 00:29:38.775 "bdev_nvme_apply_firmware", 00:29:38.775 "bdev_nvme_detach_controller", 00:29:38.775 "bdev_nvme_get_controllers", 00:29:38.775 "bdev_nvme_attach_controller", 00:29:38.775 "bdev_nvme_set_hotplug", 00:29:38.775 "bdev_nvme_set_options", 00:29:38.775 "bdev_passthru_delete", 00:29:38.775 "bdev_passthru_create", 00:29:38.775 "bdev_lvol_set_parent_bdev", 00:29:38.775 "bdev_lvol_set_parent", 00:29:38.775 "bdev_lvol_check_shallow_copy", 00:29:38.775 "bdev_lvol_start_shallow_copy", 00:29:38.775 "bdev_lvol_grow_lvstore", 00:29:38.775 "bdev_lvol_get_lvols", 00:29:38.775 "bdev_lvol_get_lvstores", 00:29:38.775 "bdev_lvol_delete", 00:29:38.775 "bdev_lvol_set_read_only", 00:29:38.775 "bdev_lvol_resize", 00:29:38.775 "bdev_lvol_decouple_parent", 00:29:38.775 "bdev_lvol_inflate", 00:29:38.775 "bdev_lvol_rename", 00:29:38.775 "bdev_lvol_clone_bdev", 00:29:38.775 "bdev_lvol_clone", 00:29:38.775 "bdev_lvol_snapshot", 00:29:38.775 "bdev_lvol_create", 00:29:38.775 "bdev_lvol_delete_lvstore", 00:29:38.775 "bdev_lvol_rename_lvstore", 00:29:38.775 "bdev_lvol_create_lvstore", 00:29:38.775 "bdev_raid_set_options", 00:29:38.775 "bdev_raid_remove_base_bdev", 00:29:38.775 "bdev_raid_add_base_bdev", 00:29:38.775 "bdev_raid_delete", 00:29:38.775 "bdev_raid_create", 00:29:38.775 "bdev_raid_get_bdevs", 00:29:38.775 "bdev_error_inject_error", 00:29:38.775 "bdev_error_delete", 00:29:38.775 "bdev_error_create", 00:29:38.775 "bdev_split_delete", 00:29:38.775 
"bdev_split_create", 00:29:38.775 "bdev_delay_delete", 00:29:38.775 "bdev_delay_create", 00:29:38.775 "bdev_delay_update_latency", 00:29:38.775 "bdev_zone_block_delete", 00:29:38.775 "bdev_zone_block_create", 00:29:38.775 "blobfs_create", 00:29:38.775 "blobfs_detect", 00:29:38.775 "blobfs_set_cache_size", 00:29:38.775 "bdev_aio_delete", 00:29:38.775 "bdev_aio_rescan", 00:29:38.775 "bdev_aio_create", 00:29:38.775 "bdev_ftl_set_property", 00:29:38.775 "bdev_ftl_get_properties", 00:29:38.775 "bdev_ftl_get_stats", 00:29:38.775 "bdev_ftl_unmap", 00:29:38.775 "bdev_ftl_unload", 00:29:38.775 "bdev_ftl_delete", 00:29:38.775 "bdev_ftl_load", 00:29:38.775 "bdev_ftl_create", 00:29:38.775 "bdev_virtio_attach_controller", 00:29:38.775 "bdev_virtio_scsi_get_devices", 00:29:38.775 "bdev_virtio_detach_controller", 00:29:38.775 "bdev_virtio_blk_set_hotplug", 00:29:38.775 "bdev_iscsi_delete", 00:29:38.775 "bdev_iscsi_create", 00:29:38.775 "bdev_iscsi_set_options", 00:29:38.775 "accel_error_inject_error", 00:29:38.775 "ioat_scan_accel_module", 00:29:38.775 "dsa_scan_accel_module", 00:29:38.775 "iaa_scan_accel_module", 00:29:38.775 "keyring_file_remove_key", 00:29:38.775 "keyring_file_add_key", 00:29:38.775 "keyring_linux_set_options", 00:29:38.775 "fsdev_aio_delete", 00:29:38.775 "fsdev_aio_create", 00:29:38.775 "iscsi_get_histogram", 00:29:38.775 "iscsi_enable_histogram", 00:29:38.775 "iscsi_set_options", 00:29:38.775 "iscsi_get_auth_groups", 00:29:38.775 "iscsi_auth_group_remove_secret", 00:29:38.775 "iscsi_auth_group_add_secret", 00:29:38.775 "iscsi_delete_auth_group", 00:29:38.775 "iscsi_create_auth_group", 00:29:38.775 "iscsi_set_discovery_auth", 00:29:38.775 "iscsi_get_options", 00:29:38.775 "iscsi_target_node_request_logout", 00:29:38.775 "iscsi_target_node_set_redirect", 00:29:38.775 "iscsi_target_node_set_auth", 00:29:38.775 "iscsi_target_node_add_lun", 00:29:38.775 "iscsi_get_stats", 00:29:38.775 "iscsi_get_connections", 00:29:38.775 "iscsi_portal_group_set_auth", 
00:29:38.775 "iscsi_start_portal_group", 00:29:38.775 "iscsi_delete_portal_group", 00:29:38.775 "iscsi_create_portal_group", 00:29:38.775 "iscsi_get_portal_groups", 00:29:38.775 "iscsi_delete_target_node", 00:29:38.775 "iscsi_target_node_remove_pg_ig_maps", 00:29:38.775 "iscsi_target_node_add_pg_ig_maps", 00:29:38.775 "iscsi_create_target_node", 00:29:38.775 "iscsi_get_target_nodes", 00:29:38.775 "iscsi_delete_initiator_group", 00:29:38.775 "iscsi_initiator_group_remove_initiators", 00:29:38.775 "iscsi_initiator_group_add_initiators", 00:29:38.775 "iscsi_create_initiator_group", 00:29:38.775 "iscsi_get_initiator_groups", 00:29:38.775 "nvmf_set_crdt", 00:29:38.775 "nvmf_set_config", 00:29:38.775 "nvmf_set_max_subsystems", 00:29:38.775 "nvmf_stop_mdns_prr", 00:29:38.775 "nvmf_publish_mdns_prr", 00:29:38.775 "nvmf_subsystem_get_listeners", 00:29:38.775 "nvmf_subsystem_get_qpairs", 00:29:38.775 "nvmf_subsystem_get_controllers", 00:29:38.775 "nvmf_get_stats", 00:29:38.775 "nvmf_get_transports", 00:29:38.775 "nvmf_create_transport", 00:29:38.775 "nvmf_get_targets", 00:29:38.775 "nvmf_delete_target", 00:29:38.775 "nvmf_create_target", 00:29:38.775 "nvmf_subsystem_allow_any_host", 00:29:38.775 "nvmf_subsystem_set_keys", 00:29:38.775 "nvmf_subsystem_remove_host", 00:29:38.775 "nvmf_subsystem_add_host", 00:29:38.775 "nvmf_ns_remove_host", 00:29:38.775 "nvmf_ns_add_host", 00:29:38.775 "nvmf_subsystem_remove_ns", 00:29:38.775 "nvmf_subsystem_set_ns_ana_group", 00:29:38.775 "nvmf_subsystem_add_ns", 00:29:38.775 "nvmf_subsystem_listener_set_ana_state", 00:29:38.775 "nvmf_discovery_get_referrals", 00:29:38.775 "nvmf_discovery_remove_referral", 00:29:38.775 "nvmf_discovery_add_referral", 00:29:38.775 "nvmf_subsystem_remove_listener", 00:29:38.775 "nvmf_subsystem_add_listener", 00:29:38.775 "nvmf_delete_subsystem", 00:29:38.775 "nvmf_create_subsystem", 00:29:38.775 "nvmf_get_subsystems", 00:29:38.775 "env_dpdk_get_mem_stats", 00:29:38.775 "nbd_get_disks", 00:29:38.775 
"nbd_stop_disk", 00:29:38.775 "nbd_start_disk", 00:29:38.775 "ublk_recover_disk", 00:29:38.775 "ublk_get_disks", 00:29:38.775 "ublk_stop_disk", 00:29:38.775 "ublk_start_disk", 00:29:38.775 "ublk_destroy_target", 00:29:38.775 "ublk_create_target", 00:29:38.775 "virtio_blk_create_transport", 00:29:38.775 "virtio_blk_get_transports", 00:29:38.775 "vhost_controller_set_coalescing", 00:29:38.775 "vhost_get_controllers", 00:29:38.775 "vhost_delete_controller", 00:29:38.775 "vhost_create_blk_controller", 00:29:38.775 "vhost_scsi_controller_remove_target", 00:29:38.775 "vhost_scsi_controller_add_target", 00:29:38.775 "vhost_start_scsi_controller", 00:29:38.775 "vhost_create_scsi_controller", 00:29:38.775 "thread_set_cpumask", 00:29:38.775 "scheduler_set_options", 00:29:38.775 "framework_get_governor", 00:29:38.775 "framework_get_scheduler", 00:29:38.775 "framework_set_scheduler", 00:29:38.775 "framework_get_reactors", 00:29:38.775 "thread_get_io_channels", 00:29:38.775 "thread_get_pollers", 00:29:38.775 "thread_get_stats", 00:29:38.775 "framework_monitor_context_switch", 00:29:38.775 "spdk_kill_instance", 00:29:38.775 "log_enable_timestamps", 00:29:38.775 "log_get_flags", 00:29:38.775 "log_clear_flag", 00:29:38.775 "log_set_flag", 00:29:38.775 "log_get_level", 00:29:38.775 "log_set_level", 00:29:38.775 "log_get_print_level", 00:29:38.775 "log_set_print_level", 00:29:38.775 "framework_enable_cpumask_locks", 00:29:38.775 "framework_disable_cpumask_locks", 00:29:38.775 "framework_wait_init", 00:29:38.775 "framework_start_init", 00:29:38.775 "scsi_get_devices", 00:29:38.775 "bdev_get_histogram", 00:29:38.775 "bdev_enable_histogram", 00:29:38.775 "bdev_set_qos_limit", 00:29:38.775 "bdev_set_qd_sampling_period", 00:29:38.775 "bdev_get_bdevs", 00:29:38.775 "bdev_reset_iostat", 00:29:38.775 "bdev_get_iostat", 00:29:38.775 "bdev_examine", 00:29:38.775 "bdev_wait_for_examine", 00:29:38.775 "bdev_set_options", 00:29:38.775 "accel_get_stats", 00:29:38.775 "accel_set_options", 
00:29:38.775 "accel_set_driver", 00:29:38.775 "accel_crypto_key_destroy", 00:29:38.775 "accel_crypto_keys_get", 00:29:38.775 "accel_crypto_key_create", 00:29:38.775 "accel_assign_opc", 00:29:38.775 "accel_get_module_info", 00:29:38.775 "accel_get_opc_assignments", 00:29:38.775 "vmd_rescan", 00:29:38.775 "vmd_remove_device", 00:29:38.775 "vmd_enable", 00:29:38.775 "sock_get_default_impl", 00:29:38.775 "sock_set_default_impl", 00:29:38.775 "sock_impl_set_options", 00:29:38.775 "sock_impl_get_options", 00:29:38.775 "iobuf_get_stats", 00:29:38.775 "iobuf_set_options", 00:29:38.775 "keyring_get_keys", 00:29:38.775 "framework_get_pci_devices", 00:29:38.775 "framework_get_config", 00:29:38.775 "framework_get_subsystems", 00:29:38.775 "fsdev_set_opts", 00:29:38.775 "fsdev_get_opts", 00:29:38.775 "trace_get_info", 00:29:38.775 "trace_get_tpoint_group_mask", 00:29:38.775 "trace_disable_tpoint_group", 00:29:38.775 "trace_enable_tpoint_group", 00:29:38.775 "trace_clear_tpoint_mask", 00:29:38.775 "trace_set_tpoint_mask", 00:29:38.775 "notify_get_notifications", 00:29:38.775 "notify_get_types", 00:29:38.775 "spdk_get_version", 00:29:38.775 "rpc_get_methods" 00:29:38.775 ] 00:29:38.775 23:11:19 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:29:38.775 23:11:19 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:38.775 23:11:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:38.775 23:11:19 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:29:38.775 23:11:19 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57722 00:29:38.775 23:11:19 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57722 ']' 00:29:38.775 23:11:19 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57722 00:29:38.776 23:11:19 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:29:39.034 23:11:19 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:39.034 23:11:19 spdkcli_tcp -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57722 00:29:39.034 killing process with pid 57722 00:29:39.034 23:11:19 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:39.034 23:11:19 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:39.034 23:11:19 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57722' 00:29:39.034 23:11:19 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57722 00:29:39.034 23:11:19 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57722 00:29:41.577 ************************************ 00:29:41.577 END TEST spdkcli_tcp 00:29:41.577 ************************************ 00:29:41.577 00:29:41.577 real 0m4.372s 00:29:41.577 user 0m7.785s 00:29:41.577 sys 0m0.678s 00:29:41.577 23:11:21 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:41.577 23:11:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:41.577 23:11:21 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:29:41.577 23:11:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:41.577 23:11:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:41.577 23:11:21 -- common/autotest_common.sh@10 -- # set +x 00:29:41.577 ************************************ 00:29:41.577 START TEST dpdk_mem_utility 00:29:41.577 ************************************ 00:29:41.577 23:11:21 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:29:41.577 * Looking for test storage... 
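The teardown that just ran follows the `killprocess` pattern from autotest_common.sh: confirm the target pid is still alive with `kill -0`, log it, send SIGTERM, then reap it with `wait`. A minimal standalone sketch (simplified: the real helper also inspects the process name with `ps` and special-cases processes launched under `sudo`):

```shell
# Minimal sketch of the killprocess teardown pattern seen in the log.
killprocess() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 1   # nothing to do if already gone
  echo "killing process with pid $pid"
  kill "$pid"                              # SIGTERM
  wait "$pid" 2>/dev/null || true          # reap; ignore the signal exit status
}

sleep 60 &    # stand-in for the spdk_tgt process under test
killprocess $!
```

In the runs above this is what produces the "killing process with pid 57615" / "killing process with pid 57722" lines before each `wait`.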
00:29:41.577 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:29:41.577 23:11:22 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:41.577 23:11:22 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:29:41.577 23:11:22 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:41.577 23:11:22 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:41.577 23:11:22 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:41.577 23:11:22 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:41.577 23:11:22 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:41.577 23:11:22 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:29:41.577 23:11:22 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:29:41.577 23:11:22 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:29:41.577 23:11:22 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:29:41.577 23:11:22 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:29:41.577 23:11:22 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:29:41.577 23:11:22 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:29:41.577 23:11:22 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:41.577 23:11:22 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:29:41.577 23:11:22 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:29:41.577 23:11:22 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:41.577 23:11:22 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:41.577 23:11:22 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:29:41.577 23:11:22 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:29:41.577 23:11:22 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:41.577 23:11:22 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:29:41.835 23:11:22 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:29:41.835 23:11:22 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:29:41.836 23:11:22 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:29:41.836 23:11:22 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:41.836 23:11:22 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:29:41.836 23:11:22 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:29:41.836 23:11:22 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:41.836 23:11:22 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:41.836 23:11:22 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:29:41.836 23:11:22 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:41.836 23:11:22 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:41.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.836 --rc genhtml_branch_coverage=1 00:29:41.836 --rc genhtml_function_coverage=1 00:29:41.836 --rc genhtml_legend=1 00:29:41.836 --rc geninfo_all_blocks=1 00:29:41.836 --rc geninfo_unexecuted_blocks=1 00:29:41.836 00:29:41.836 ' 00:29:41.836 23:11:22 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:41.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.836 --rc genhtml_branch_coverage=1 00:29:41.836 --rc genhtml_function_coverage=1 00:29:41.836 --rc genhtml_legend=1 00:29:41.836 --rc geninfo_all_blocks=1 00:29:41.836 --rc 
geninfo_unexecuted_blocks=1 00:29:41.836 00:29:41.836 ' 00:29:41.836 23:11:22 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:41.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.836 --rc genhtml_branch_coverage=1 00:29:41.836 --rc genhtml_function_coverage=1 00:29:41.836 --rc genhtml_legend=1 00:29:41.836 --rc geninfo_all_blocks=1 00:29:41.836 --rc geninfo_unexecuted_blocks=1 00:29:41.836 00:29:41.836 ' 00:29:41.836 23:11:22 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:41.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.836 --rc genhtml_branch_coverage=1 00:29:41.836 --rc genhtml_function_coverage=1 00:29:41.836 --rc genhtml_legend=1 00:29:41.836 --rc geninfo_all_blocks=1 00:29:41.836 --rc geninfo_unexecuted_blocks=1 00:29:41.836 00:29:41.836 ' 00:29:41.836 23:11:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:29:41.836 23:11:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57850 00:29:41.836 23:11:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57850 00:29:41.836 23:11:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:41.836 23:11:22 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57850 ']' 00:29:41.836 23:11:22 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:41.836 23:11:22 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:41.836 23:11:22 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:41.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
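Each target launched above blocks on `waitforlisten`, which polls until the freshly started spdk_tgt accepts connections on its UNIX RPC socket, giving up after `max_retries=100` attempts or if the process dies first. A hedged sketch of that wait loop (simplified and assumed: the real helper in autotest_common.sh probes the socket with rpc.py rather than merely testing that the socket file exists):

```shell
# Simplified sketch of the waitforlisten pattern from the log: poll until the
# target process has created its UNIX RPC socket, bailing out if the process dies.
waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
  local max_retries=100 i
  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
  for (( i = 0; i < max_retries; i++ )); do
    kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
    [[ -S $rpc_addr ]] && return 0           # socket exists: treat as ready
    sleep 0.1
  done
  return 1                                   # timed out
}
```

In the tests above the pid comes from launching build/bin/spdk_tgt in the background, and the "Waiting for process to start up..." lines in the log are this helper's echo.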
00:29:41.836 23:11:22 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:41.836 23:11:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:29:41.836 [2024-12-09 23:11:22.337064] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:29:41.836 [2024-12-09 23:11:22.337539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57850 ] 00:29:42.094 [2024-12-09 23:11:22.523912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.094 [2024-12-09 23:11:22.649757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:43.025 23:11:23 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:43.025 23:11:23 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:29:43.025 23:11:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:29:43.025 23:11:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:29:43.025 23:11:23 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.025 23:11:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:29:43.025 { 00:29:43.025 "filename": "/tmp/spdk_mem_dump.txt" 00:29:43.025 } 00:29:43.025 23:11:23 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.025 23:11:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:29:43.025 DPDK memory size 824.000000 MiB in 1 heap(s) 00:29:43.025 1 heaps totaling size 824.000000 MiB 00:29:43.025 size: 824.000000 MiB heap id: 0 00:29:43.025 end heaps---------- 00:29:43.025 9 mempools totaling size 603.782043 MiB 00:29:43.025 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:29:43.025 size: 158.602051 MiB name: PDU_data_out_Pool 00:29:43.025 size: 100.555481 MiB name: bdev_io_57850 00:29:43.025 size: 50.003479 MiB name: msgpool_57850 00:29:43.025 size: 36.509338 MiB name: fsdev_io_57850 00:29:43.025 size: 21.763794 MiB name: PDU_Pool 00:29:43.025 size: 19.513306 MiB name: SCSI_TASK_Pool 00:29:43.025 size: 4.133484 MiB name: evtpool_57850 00:29:43.025 size: 0.026123 MiB name: Session_Pool 00:29:43.025 end mempools------- 00:29:43.025 6 memzones totaling size 4.142822 MiB 00:29:43.025 size: 1.000366 MiB name: RG_ring_0_57850 00:29:43.025 size: 1.000366 MiB name: RG_ring_1_57850 00:29:43.025 size: 1.000366 MiB name: RG_ring_4_57850 00:29:43.025 size: 1.000366 MiB name: RG_ring_5_57850 00:29:43.025 size: 0.125366 MiB name: RG_ring_2_57850 00:29:43.025 size: 0.015991 MiB name: RG_ring_3_57850 00:29:43.025 end memzones------- 00:29:43.025 23:11:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:29:43.284 heap id: 0 total size: 824.000000 MiB number of busy elements: 325 number of free elements: 18 00:29:43.284 list of free elements. 
size: 16.778931 MiB 00:29:43.284 element at address: 0x200006400000 with size: 1.995972 MiB 00:29:43.284 element at address: 0x20000a600000 with size: 1.995972 MiB 00:29:43.284 element at address: 0x200003e00000 with size: 1.991028 MiB 00:29:43.284 element at address: 0x200019500040 with size: 0.999939 MiB 00:29:43.284 element at address: 0x200019900040 with size: 0.999939 MiB 00:29:43.284 element at address: 0x200019a00000 with size: 0.999084 MiB 00:29:43.284 element at address: 0x200032600000 with size: 0.994324 MiB 00:29:43.284 element at address: 0x200000400000 with size: 0.992004 MiB 00:29:43.284 element at address: 0x200019200000 with size: 0.959656 MiB 00:29:43.284 element at address: 0x200019d00040 with size: 0.936401 MiB 00:29:43.284 element at address: 0x200000200000 with size: 0.716980 MiB 00:29:43.284 element at address: 0x20001b400000 with size: 0.560486 MiB 00:29:43.284 element at address: 0x200000c00000 with size: 0.489197 MiB 00:29:43.284 element at address: 0x200019600000 with size: 0.487976 MiB 00:29:43.284 element at address: 0x200019e00000 with size: 0.485413 MiB 00:29:43.284 element at address: 0x200012c00000 with size: 0.433228 MiB 00:29:43.284 element at address: 0x200028800000 with size: 0.390442 MiB 00:29:43.284 element at address: 0x200000800000 with size: 0.350891 MiB 00:29:43.284 list of standard malloc elements. 
size: 199.290161 MiB 00:29:43.284 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:29:43.284 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:29:43.284 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:29:43.284 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:29:43.284 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:29:43.284 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:29:43.284 element at address: 0x200019deff40 with size: 0.062683 MiB 00:29:43.284 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:29:43.284 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:29:43.284 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:29:43.284 element at address: 0x200012bff040 with size: 0.000305 MiB 00:29:43.284 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:29:43.284 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:29:43.284 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:29:43.284 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:29:43.284 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:29:43.284 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:29:43.284 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:29:43.284 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:29:43.284 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:29:43.284 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:29:43.284 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:29:43.284 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:29:43.284 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:29:43.284 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:29:43.284 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:29:43.284 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:29:43.284 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:29:43.284 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:29:43.284 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:29:43.284 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:29:43.284 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:29:43.284 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:29:43.284 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:29:43.284 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:29:43.284 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:29:43.284 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:29:43.284 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:29:43.284 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:29:43.284 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:29:43.284 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:29:43.284 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:29:43.284 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:29:43.284 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x200000c7e4c0 with 
size: 0.000244 MiB 00:29:43.284 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:29:43.284 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:29:43.284 element at address: 0x200000cff000 with size: 0.000244 MiB 00:29:43.284 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:29:43.284 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:29:43.284 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:29:43.284 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:29:43.284 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:29:43.285 element at address: 0x200012bff180 with size: 0.000244 MiB 00:29:43.285 element at address: 0x200012bff280 with size: 0.000244 MiB 00:29:43.285 element at address: 0x200012bff380 with size: 0.000244 MiB 00:29:43.285 element at address: 0x200012bff480 with size: 0.000244 MiB 00:29:43.285 element at address: 
0x200012bff580 with size: 0.000244 MiB 00:29:43.285 element at address: 0x200012bff680 with size: 0.000244 MiB 00:29:43.285 element at address: 0x200012bff780 with size: 0.000244 MiB 00:29:43.285 element at address: 0x200012bff880 with size: 0.000244 MiB 00:29:43.285 element at address: 0x200012bff980 with size: 0.000244 MiB 00:29:43.285 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:29:43.285 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:29:43.285 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:29:43.285 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:29:43.285 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:29:43.285 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:29:43.285 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:29:43.285 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:29:43.285 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:29:43.285 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:29:43.285 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:29:43.285 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:29:43.285 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:29:43.285 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:29:43.285 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:29:43.285 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:29:43.285 
element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:29:43.285 element at address: 0x200019affc40 with size: 0.000244 MiB 00:29:43.285 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b48f7c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b48f8c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4907c0 with size: 0.000244 
MiB 00:29:43.285 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4923c0 
with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:29:43.285 element at 
address: 0x20001b493fc0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:29:43.285 element at address: 0x200028863f40 with size: 0.000244 MiB 00:29:43.285 element at address: 0x200028864040 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20002886af80 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20002886b080 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20002886b180 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20002886b280 with size: 0.000244 MiB 
00:29:43.285 element at address: 0x20002886b380 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20002886b480 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20002886b580 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20002886b680 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20002886b780 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20002886b880 with size: 0.000244 MiB 00:29:43.285 element at address: 0x20002886b980 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886be80 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886c080 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886c180 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886c280 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886c380 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886c480 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886c580 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886c680 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886c780 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886c880 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886c980 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886ce80 with 
size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886d080 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886d180 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886d280 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886d380 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886d480 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886d580 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886d680 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886d780 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886d880 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886d980 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886da80 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886db80 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886de80 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886df80 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886e080 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886e180 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886e280 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886e380 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886e480 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886e580 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886e680 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886e780 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886e880 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886e980 with size: 0.000244 MiB 00:29:43.286 element at address: 
0x20002886ea80 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886f080 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886f180 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886f280 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886f380 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886f480 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886f580 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886f680 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886f780 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886f880 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886f980 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:29:43.286 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:29:43.286 list of memzone associated elements. 
size: 607.930908 MiB 00:29:43.286 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:29:43.286 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:29:43.286 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:29:43.286 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:29:43.286 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:29:43.286 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57850_0 00:29:43.286 element at address: 0x200000dff340 with size: 48.003113 MiB 00:29:43.286 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57850_0 00:29:43.286 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:29:43.286 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57850_0 00:29:43.286 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:29:43.286 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:29:43.286 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:29:43.286 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:29:43.286 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:29:43.286 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57850_0 00:29:43.286 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:29:43.286 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57850 00:29:43.286 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:29:43.286 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57850 00:29:43.286 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:29:43.286 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:29:43.286 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:29:43.286 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:29:43.286 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:29:43.286 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:29:43.286 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:29:43.286 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:29:43.286 element at address: 0x200000cff100 with size: 1.000549 MiB 00:29:43.286 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57850 00:29:43.286 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:29:43.286 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57850 00:29:43.286 element at address: 0x200019affd40 with size: 1.000549 MiB 00:29:43.286 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57850 00:29:43.286 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:29:43.286 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57850 00:29:43.286 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:29:43.286 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57850 00:29:43.286 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:29:43.286 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57850 00:29:43.286 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:29:43.286 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:29:43.286 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:29:43.286 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:29:43.286 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:29:43.286 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:29:43.286 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:29:43.286 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57850 00:29:43.286 element at address: 0x20000085df80 with size: 0.125549 MiB 00:29:43.286 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57850 00:29:43.286 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:29:43.286 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:29:43.286 element at address: 0x200028864140 with size: 0.023804 MiB 00:29:43.286 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:29:43.286 element at address: 0x200000859d40 with size: 0.016174 MiB 00:29:43.286 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57850 00:29:43.286 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:29:43.286 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:29:43.286 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:29:43.286 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57850 00:29:43.286 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:29:43.286 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57850 00:29:43.286 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:29:43.286 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57850 00:29:43.286 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:29:43.286 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:29:43.286 23:11:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:29:43.286 23:11:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57850 00:29:43.286 23:11:23 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57850 ']' 00:29:43.286 23:11:23 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57850 00:29:43.286 23:11:23 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:29:43.286 23:11:23 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:43.286 23:11:23 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57850 00:29:43.286 23:11:23 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:43.286 23:11:23 dpdk_mem_utility -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:43.286 killing process with pid 57850 00:29:43.286 23:11:23 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57850' 00:29:43.286 23:11:23 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57850 00:29:43.286 23:11:23 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57850 00:29:45.814 00:29:45.814 real 0m4.165s 00:29:45.814 user 0m4.076s 00:29:45.814 sys 0m0.604s 00:29:45.814 23:11:26 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:45.814 ************************************ 00:29:45.814 END TEST dpdk_mem_utility 00:29:45.814 23:11:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:29:45.814 ************************************ 00:29:45.814 23:11:26 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:29:45.814 23:11:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:45.814 23:11:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:45.814 23:11:26 -- common/autotest_common.sh@10 -- # set +x 00:29:45.814 ************************************ 00:29:45.814 START TEST event 00:29:45.814 ************************************ 00:29:45.814 23:11:26 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:29:45.814 * Looking for test storage... 
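The `killprocess 57850` trace above follows a common guard pattern from `autotest_common.sh`: reject an empty PID argument, probe the process with `kill -0`, look up its command name with `ps`, refuse to kill a bare `sudo`, then kill and reap it. A minimal standalone sketch of that pattern (illustrative only, not the literal `autotest_common.sh` source):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess guard pattern visible in the trace above.
killprocess() {
    local pid=$1
    # refuse an empty pid argument (the "'[' -z ... ']'" check in the trace)
    [ -n "$pid" ] || return 1
    # kill -0 probes for existence without delivering a signal
    kill -0 "$pid" 2>/dev/null || return 1
    if [ "$(uname)" = Linux ]; then
        # resolve the command name; never kill a bare sudo wrapper
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    # reap the process if it is our child; ignore errors otherwise
    wait "$pid" 2>/dev/null || true
}
```

Each step mirrors a command visible in the log: `'[' -z 57850 ']'`, `kill -0 57850`, `uname`, and `ps --no-headers -o comm= 57850`.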
00:29:45.814 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:29:45.814 23:11:26 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:45.814 23:11:26 event -- common/autotest_common.sh@1711 -- # lcov --version 00:29:45.814 23:11:26 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:45.814 23:11:26 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:45.814 23:11:26 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:45.814 23:11:26 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:45.814 23:11:26 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:45.814 23:11:26 event -- scripts/common.sh@336 -- # IFS=.-: 00:29:45.814 23:11:26 event -- scripts/common.sh@336 -- # read -ra ver1 00:29:45.814 23:11:26 event -- scripts/common.sh@337 -- # IFS=.-: 00:29:45.814 23:11:26 event -- scripts/common.sh@337 -- # read -ra ver2 00:29:45.814 23:11:26 event -- scripts/common.sh@338 -- # local 'op=<' 00:29:45.814 23:11:26 event -- scripts/common.sh@340 -- # ver1_l=2 00:29:45.814 23:11:26 event -- scripts/common.sh@341 -- # ver2_l=1 00:29:45.814 23:11:26 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:45.814 23:11:26 event -- scripts/common.sh@344 -- # case "$op" in 00:29:45.814 23:11:26 event -- scripts/common.sh@345 -- # : 1 00:29:45.814 23:11:26 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:45.814 23:11:26 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:45.814 23:11:26 event -- scripts/common.sh@365 -- # decimal 1 00:29:45.814 23:11:26 event -- scripts/common.sh@353 -- # local d=1 00:29:45.814 23:11:26 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:46.072 23:11:26 event -- scripts/common.sh@355 -- # echo 1 00:29:46.072 23:11:26 event -- scripts/common.sh@365 -- # ver1[v]=1 00:29:46.072 23:11:26 event -- scripts/common.sh@366 -- # decimal 2 00:29:46.072 23:11:26 event -- scripts/common.sh@353 -- # local d=2 00:29:46.072 23:11:26 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:46.072 23:11:26 event -- scripts/common.sh@355 -- # echo 2 00:29:46.072 23:11:26 event -- scripts/common.sh@366 -- # ver2[v]=2 00:29:46.072 23:11:26 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:46.072 23:11:26 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:46.072 23:11:26 event -- scripts/common.sh@368 -- # return 0 00:29:46.072 23:11:26 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:46.072 23:11:26 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:46.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.072 --rc genhtml_branch_coverage=1 00:29:46.072 --rc genhtml_function_coverage=1 00:29:46.072 --rc genhtml_legend=1 00:29:46.072 --rc geninfo_all_blocks=1 00:29:46.072 --rc geninfo_unexecuted_blocks=1 00:29:46.072 00:29:46.072 ' 00:29:46.072 23:11:26 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:46.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.072 --rc genhtml_branch_coverage=1 00:29:46.072 --rc genhtml_function_coverage=1 00:29:46.072 --rc genhtml_legend=1 00:29:46.072 --rc geninfo_all_blocks=1 00:29:46.072 --rc geninfo_unexecuted_blocks=1 00:29:46.072 00:29:46.072 ' 00:29:46.072 23:11:26 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:46.072 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:29:46.072 --rc genhtml_branch_coverage=1 00:29:46.072 --rc genhtml_function_coverage=1 00:29:46.072 --rc genhtml_legend=1 00:29:46.072 --rc geninfo_all_blocks=1 00:29:46.072 --rc geninfo_unexecuted_blocks=1 00:29:46.072 00:29:46.072 ' 00:29:46.072 23:11:26 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:46.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.072 --rc genhtml_branch_coverage=1 00:29:46.072 --rc genhtml_function_coverage=1 00:29:46.072 --rc genhtml_legend=1 00:29:46.072 --rc geninfo_all_blocks=1 00:29:46.072 --rc geninfo_unexecuted_blocks=1 00:29:46.072 00:29:46.072 ' 00:29:46.072 23:11:26 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:29:46.072 23:11:26 event -- bdev/nbd_common.sh@6 -- # set -e 00:29:46.072 23:11:26 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:29:46.072 23:11:26 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:29:46.072 23:11:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:46.072 23:11:26 event -- common/autotest_common.sh@10 -- # set +x 00:29:46.072 ************************************ 00:29:46.072 START TEST event_perf 00:29:46.072 ************************************ 00:29:46.072 23:11:26 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:29:46.072 Running I/O for 1 seconds...[2024-12-09 23:11:26.533441] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:29:46.072 [2024-12-09 23:11:26.533725] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57958 ] 00:29:46.332 [2024-12-09 23:11:26.719477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:46.332 [2024-12-09 23:11:26.844142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:46.332 [2024-12-09 23:11:26.844336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:46.332 [2024-12-09 23:11:26.844470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:46.332 [2024-12-09 23:11:26.844506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:47.706 Running I/O for 1 seconds... 00:29:47.706 lcore 0: 205654 00:29:47.706 lcore 1: 205654 00:29:47.706 lcore 2: 205656 00:29:47.706 lcore 3: 205654 00:29:47.706 done. 
00:29:47.706 00:29:47.706 real 0m1.619s 00:29:47.706 user 0m4.369s 00:29:47.706 sys 0m0.123s 00:29:47.706 23:11:28 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:47.706 23:11:28 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:29:47.706 ************************************ 00:29:47.706 END TEST event_perf 00:29:47.706 ************************************ 00:29:47.706 23:11:28 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:29:47.706 23:11:28 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:47.706 23:11:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:47.706 23:11:28 event -- common/autotest_common.sh@10 -- # set +x 00:29:47.706 ************************************ 00:29:47.706 START TEST event_reactor 00:29:47.706 ************************************ 00:29:47.706 23:11:28 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:29:47.706 [2024-12-09 23:11:28.222828] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:29:47.706 [2024-12-09 23:11:28.222958] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57997 ] 00:29:47.963 [2024-12-09 23:11:28.401204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:47.963 [2024-12-09 23:11:28.523434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.338 test_start 00:29:49.338 oneshot 00:29:49.338 tick 100 00:29:49.338 tick 100 00:29:49.338 tick 250 00:29:49.338 tick 100 00:29:49.338 tick 100 00:29:49.338 tick 100 00:29:49.338 tick 250 00:29:49.338 tick 500 00:29:49.338 tick 100 00:29:49.338 tick 100 00:29:49.338 tick 250 00:29:49.338 tick 100 00:29:49.338 tick 100 00:29:49.338 test_end 00:29:49.338 00:29:49.338 real 0m1.585s 00:29:49.338 user 0m1.369s 00:29:49.338 sys 0m0.107s 00:29:49.338 23:11:29 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:49.338 23:11:29 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:29:49.338 ************************************ 00:29:49.338 END TEST event_reactor 00:29:49.338 ************************************ 00:29:49.338 23:11:29 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:29:49.338 23:11:29 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:49.338 23:11:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:49.338 23:11:29 event -- common/autotest_common.sh@10 -- # set +x 00:29:49.338 ************************************ 00:29:49.338 START TEST event_reactor_perf 00:29:49.338 ************************************ 00:29:49.338 23:11:29 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:29:49.338 [2024-12-09 
23:11:29.886417] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:29:49.338 [2024-12-09 23:11:29.886550] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58034 ] 00:29:49.597 [2024-12-09 23:11:30.067764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.597 [2024-12-09 23:11:30.186720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:50.974 test_start 00:29:50.974 test_end 00:29:50.974 Performance: 371049 events per second 00:29:50.974 00:29:50.974 real 0m1.583s 00:29:50.974 user 0m1.361s 00:29:50.974 sys 0m0.114s 00:29:50.974 23:11:31 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:50.974 23:11:31 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:29:50.974 ************************************ 00:29:50.974 END TEST event_reactor_perf 00:29:50.974 ************************************ 00:29:50.974 23:11:31 event -- event/event.sh@49 -- # uname -s 00:29:50.974 23:11:31 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:29:50.974 23:11:31 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:29:50.974 23:11:31 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:50.974 23:11:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:50.974 23:11:31 event -- common/autotest_common.sh@10 -- # set +x 00:29:50.974 ************************************ 00:29:50.974 START TEST event_scheduler 00:29:50.974 ************************************ 00:29:50.974 23:11:31 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:29:51.232 * Looking for test storage... 
00:29:51.232 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:29:51.232 23:11:31 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:51.232 23:11:31 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:29:51.232 23:11:31 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:51.232 23:11:31 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:51.232 23:11:31 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:51.232 23:11:31 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:51.233 23:11:31 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:51.233 23:11:31 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:29:51.233 23:11:31 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:29:51.233 23:11:31 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:29:51.233 23:11:31 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:29:51.233 23:11:31 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:29:51.233 23:11:31 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:29:51.233 23:11:31 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:29:51.233 23:11:31 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:51.233 23:11:31 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:29:51.233 23:11:31 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:29:51.233 23:11:31 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:51.233 23:11:31 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:51.233 23:11:31 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:29:51.233 23:11:31 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:29:51.233 23:11:31 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:51.233 23:11:31 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:29:51.233 23:11:31 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:29:51.233 23:11:31 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:29:51.233 23:11:31 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:29:51.233 23:11:31 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:51.233 23:11:31 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:29:51.233 23:11:31 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:29:51.233 23:11:31 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:51.233 23:11:31 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:51.233 23:11:31 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:29:51.233 23:11:31 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:51.233 23:11:31 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:51.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.233 --rc genhtml_branch_coverage=1 00:29:51.233 --rc genhtml_function_coverage=1 00:29:51.233 --rc genhtml_legend=1 00:29:51.233 --rc geninfo_all_blocks=1 00:29:51.233 --rc geninfo_unexecuted_blocks=1 00:29:51.233 00:29:51.233 ' 00:29:51.233 23:11:31 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:51.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.233 --rc genhtml_branch_coverage=1 00:29:51.233 --rc genhtml_function_coverage=1 00:29:51.233 --rc 
genhtml_legend=1 00:29:51.233 --rc geninfo_all_blocks=1 00:29:51.233 --rc geninfo_unexecuted_blocks=1 00:29:51.233 00:29:51.233 ' 00:29:51.233 23:11:31 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:51.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.233 --rc genhtml_branch_coverage=1 00:29:51.233 --rc genhtml_function_coverage=1 00:29:51.233 --rc genhtml_legend=1 00:29:51.233 --rc geninfo_all_blocks=1 00:29:51.233 --rc geninfo_unexecuted_blocks=1 00:29:51.233 00:29:51.233 ' 00:29:51.233 23:11:31 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:51.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.233 --rc genhtml_branch_coverage=1 00:29:51.233 --rc genhtml_function_coverage=1 00:29:51.233 --rc genhtml_legend=1 00:29:51.233 --rc geninfo_all_blocks=1 00:29:51.233 --rc geninfo_unexecuted_blocks=1 00:29:51.233 00:29:51.233 ' 00:29:51.233 23:11:31 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:29:51.233 23:11:31 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58110 00:29:51.233 23:11:31 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:29:51.233 23:11:31 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:29:51.233 23:11:31 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58110 00:29:51.233 23:11:31 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58110 ']' 00:29:51.233 23:11:31 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:51.233 23:11:31 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:51.233 23:11:31 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:29:51.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:51.233 23:11:31 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:51.233 23:11:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:29:51.233 [2024-12-09 23:11:31.825198] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:29:51.233 [2024-12-09 23:11:31.825341] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58110 ] 00:29:51.492 [2024-12-09 23:11:32.009170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:51.750 [2024-12-09 23:11:32.136520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:51.750 [2024-12-09 23:11:32.136710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:51.750 [2024-12-09 23:11:32.136887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:51.750 [2024-12-09 23:11:32.137625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:52.316 23:11:32 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:52.316 23:11:32 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:29:52.316 23:11:32 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:29:52.316 23:11:32 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.316 23:11:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:29:52.316 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:29:52.316 POWER: Cannot set governor of lcore 0 to userspace 00:29:52.316 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:29:52.316 POWER: Cannot set governor of lcore 0 to performance 00:29:52.316 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:29:52.316 POWER: Cannot set governor of lcore 0 to userspace 00:29:52.316 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:29:52.316 POWER: Cannot set governor of lcore 0 to userspace 00:29:52.316 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:29:52.316 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:29:52.316 POWER: Unable to set Power Management Environment for lcore 0 00:29:52.316 [2024-12-09 23:11:32.705795] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:29:52.316 [2024-12-09 23:11:32.705823] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:29:52.316 [2024-12-09 23:11:32.705837] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:29:52.316 [2024-12-09 23:11:32.705864] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:29:52.316 [2024-12-09 23:11:32.705875] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:29:52.316 [2024-12-09 23:11:32.705888] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:29:52.316 23:11:32 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.316 23:11:32 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:29:52.316 23:11:32 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.316 23:11:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:29:52.575 [2024-12-09 23:11:33.049338] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:29:52.575 23:11:33 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.575 23:11:33 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:29:52.575 23:11:33 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:52.575 23:11:33 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:52.575 23:11:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:29:52.575 ************************************ 00:29:52.575 START TEST scheduler_create_thread 00:29:52.575 ************************************ 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:29:52.575 2 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:29:52.575 3 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:29:52.575 4 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:29:52.575 5 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:29:52.575 6 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:29:52.575 7 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:29:52.575 8 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:29:52.575 9 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:29:52.575 10 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.575 23:11:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:29:53.511 23:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.789 23:11:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:29:53.789 23:11:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:29:53.789 23:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.789 23:11:34 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:29:54.732 ************************************ 00:29:54.732 END TEST scheduler_create_thread 00:29:54.732 ************************************ 00:29:54.732 23:11:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.732 00:29:54.732 real 0m2.142s 00:29:54.732 user 0m0.028s 00:29:54.732 sys 0m0.006s 00:29:54.732 23:11:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:54.732 23:11:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:29:54.732 23:11:35 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:29:54.732 23:11:35 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58110 00:29:54.732 23:11:35 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58110 ']' 00:29:54.732 23:11:35 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58110 00:29:54.732 23:11:35 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:29:54.732 23:11:35 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:54.732 23:11:35 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58110 00:29:54.732 killing process with pid 58110 00:29:54.732 23:11:35 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:29:54.732 23:11:35 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:29:54.732 23:11:35 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58110' 00:29:54.732 23:11:35 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58110 00:29:54.732 23:11:35 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58110 00:29:55.304 [2024-12-09 23:11:35.686922] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:29:56.676 00:29:56.676 real 0m5.406s 00:29:56.676 user 0m8.991s 00:29:56.676 sys 0m0.544s 00:29:56.676 23:11:36 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:56.676 ************************************ 00:29:56.676 END TEST event_scheduler 00:29:56.676 ************************************ 00:29:56.676 23:11:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:29:56.676 23:11:36 event -- event/event.sh@51 -- # modprobe -n nbd 00:29:56.676 23:11:36 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:29:56.676 23:11:36 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:56.676 23:11:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:56.676 23:11:36 event -- common/autotest_common.sh@10 -- # set +x 00:29:56.676 ************************************ 00:29:56.676 START TEST app_repeat 00:29:56.676 ************************************ 00:29:56.676 23:11:36 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:29:56.676 23:11:36 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:56.676 23:11:36 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:56.676 23:11:36 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:29:56.676 23:11:36 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:29:56.676 23:11:36 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:29:56.676 23:11:36 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:29:56.676 23:11:36 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:29:56.676 23:11:36 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58216 00:29:56.676 23:11:36 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:29:56.676 
23:11:36 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:29:56.676 23:11:36 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58216' 00:29:56.676 Process app_repeat pid: 58216 00:29:56.676 spdk_app_start Round 0 00:29:56.676 23:11:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:29:56.676 23:11:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:29:56.676 23:11:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58216 /var/tmp/spdk-nbd.sock 00:29:56.676 23:11:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58216 ']' 00:29:56.676 23:11:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:29:56.676 23:11:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:56.676 23:11:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:29:56.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:29:56.676 23:11:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:56.676 23:11:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:29:56.676 [2024-12-09 23:11:37.057051] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:29:56.676 [2024-12-09 23:11:37.057171] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58216 ] 00:29:56.676 [2024-12-09 23:11:37.232406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:56.934 [2024-12-09 23:11:37.356490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.934 [2024-12-09 23:11:37.356522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:57.500 23:11:37 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:57.500 23:11:37 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:29:57.500 23:11:37 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:29:57.759 Malloc0 00:29:57.759 23:11:38 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:29:58.018 Malloc1 00:29:58.018 23:11:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:29:58.018 23:11:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:58.018 23:11:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:29:58.018 23:11:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:29:58.018 23:11:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:58.018 23:11:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:29:58.019 23:11:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:29:58.019 23:11:38 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:58.019 23:11:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:29:58.019 23:11:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:58.019 23:11:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:58.019 23:11:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:58.019 23:11:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:29:58.019 23:11:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:58.019 23:11:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:58.019 23:11:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:29:58.278 /dev/nbd0 00:29:58.278 23:11:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:58.278 23:11:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:58.278 23:11:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:29:58.278 23:11:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:29:58.278 23:11:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:58.278 23:11:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:58.278 23:11:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:29:58.278 23:11:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:29:58.278 23:11:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:58.278 23:11:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:58.278 23:11:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:29:58.278 1+0 records in 00:29:58.278 1+0 
records out 00:29:58.278 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348515 s, 11.8 MB/s 00:29:58.278 23:11:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:29:58.278 23:11:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:29:58.278 23:11:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:29:58.278 23:11:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:58.278 23:11:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:29:58.278 23:11:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:58.278 23:11:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:58.278 23:11:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:29:58.537 /dev/nbd1 00:29:58.537 23:11:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:58.537 23:11:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:58.537 23:11:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:29:58.537 23:11:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:29:58.537 23:11:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:58.537 23:11:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:58.537 23:11:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:29:58.537 23:11:39 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:29:58.537 23:11:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:58.537 23:11:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:58.537 23:11:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:29:58.537 1+0 records in 00:29:58.537 1+0 records out 00:29:58.537 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000468832 s, 8.7 MB/s 00:29:58.537 23:11:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:29:58.537 23:11:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:29:58.537 23:11:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:29:58.537 23:11:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:58.537 23:11:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:29:58.537 23:11:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:58.537 23:11:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:58.537 23:11:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:58.537 23:11:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:58.537 23:11:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:58.795 23:11:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:29:58.795 { 00:29:58.795 "nbd_device": "/dev/nbd0", 00:29:58.795 "bdev_name": "Malloc0" 00:29:58.795 }, 00:29:58.795 { 00:29:58.795 "nbd_device": "/dev/nbd1", 00:29:58.795 "bdev_name": "Malloc1" 00:29:58.795 } 00:29:58.795 ]' 00:29:58.795 23:11:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:29:58.795 { 00:29:58.795 "nbd_device": "/dev/nbd0", 00:29:58.795 "bdev_name": "Malloc0" 00:29:58.795 }, 00:29:58.795 { 00:29:58.795 "nbd_device": "/dev/nbd1", 00:29:58.795 "bdev_name": "Malloc1" 00:29:58.795 } 00:29:58.795 ]' 00:29:58.795 23:11:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:29:58.795 23:11:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:29:58.795 /dev/nbd1' 00:29:59.053 23:11:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:29:59.053 /dev/nbd1' 00:29:59.053 23:11:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:59.053 23:11:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:29:59.053 23:11:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:29:59.053 23:11:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:29:59.053 23:11:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:29:59.053 23:11:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:29:59.053 23:11:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:59.053 23:11:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:59.053 23:11:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:29:59.053 23:11:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:29:59.053 23:11:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:29:59.053 23:11:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:29:59.053 256+0 records in 00:29:59.053 256+0 records out 00:29:59.053 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124205 s, 84.4 MB/s 00:29:59.053 23:11:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:59.053 23:11:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:29:59.053 256+0 records in 00:29:59.053 256+0 records out 00:29:59.053 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0285994 s, 36.7 MB/s 00:29:59.053 23:11:39 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:59.053 23:11:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:29:59.053 256+0 records in 00:29:59.053 256+0 records out 00:29:59.053 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0341552 s, 30.7 MB/s 00:29:59.053 23:11:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:29:59.053 23:11:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:59.053 23:11:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:59.053 23:11:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:29:59.053 23:11:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:29:59.053 23:11:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:29:59.053 23:11:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:29:59.053 23:11:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:59.053 23:11:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:29:59.053 23:11:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:59.053 23:11:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:29:59.053 23:11:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:29:59.053 23:11:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:29:59.054 23:11:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:59.054 23:11:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:59.054 23:11:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:59.054 23:11:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:29:59.054 23:11:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:59.054 23:11:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:59.311 23:11:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:59.311 23:11:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:59.311 23:11:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:59.311 23:11:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:59.311 23:11:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:59.311 23:11:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:59.311 23:11:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:29:59.311 23:11:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:29:59.311 23:11:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:59.311 23:11:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:29:59.569 23:11:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:59.569 23:11:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:59.569 23:11:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:59.569 23:11:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:59.569 23:11:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:59.569 23:11:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:59.569 23:11:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:29:59.569 23:11:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:29:59.569 23:11:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:59.569 23:11:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:59.569 23:11:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:59.828 23:11:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:59.828 23:11:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:59.828 23:11:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:59.828 23:11:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:59.828 23:11:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:29:59.828 23:11:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:59.828 23:11:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:29:59.828 23:11:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:29:59.828 23:11:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:29:59.828 23:11:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:29:59.828 23:11:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:29:59.828 23:11:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:29:59.828 23:11:40 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:30:00.394 23:11:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:30:01.850 [2024-12-09 23:11:42.170767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:01.850 [2024-12-09 23:11:42.292186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:01.850 [2024-12-09 23:11:42.292190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:02.109 
[2024-12-09 23:11:42.514518] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:30:02.109 [2024-12-09 23:11:42.514629] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:30:03.484 spdk_app_start Round 1 00:30:03.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:30:03.484 23:11:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:30:03.484 23:11:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:30:03.484 23:11:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58216 /var/tmp/spdk-nbd.sock 00:30:03.484 23:11:43 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58216 ']' 00:30:03.484 23:11:43 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:30:03.484 23:11:43 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:03.484 23:11:43 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:30:03.484 23:11:43 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:03.484 23:11:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:30:03.743 23:11:44 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:03.743 23:11:44 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:30:03.743 23:11:44 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:30:04.002 Malloc0 00:30:04.002 23:11:44 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:30:04.261 Malloc1 00:30:04.261 23:11:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:30:04.261 23:11:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:04.261 23:11:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:30:04.261 23:11:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:30:04.261 23:11:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:04.261 23:11:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:30:04.261 23:11:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:30:04.261 23:11:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:04.261 23:11:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:30:04.261 23:11:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:04.261 23:11:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:04.261 23:11:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:04.261 23:11:44 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:30:04.261 23:11:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:04.261 23:11:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:04.261 23:11:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:30:04.522 /dev/nbd0 00:30:04.522 23:11:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:04.522 23:11:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:04.522 23:11:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:30:04.522 23:11:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:30:04.522 23:11:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:04.522 23:11:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:04.522 23:11:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:30:04.522 23:11:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:30:04.522 23:11:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:04.522 23:11:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:04.522 23:11:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:30:04.522 1+0 records in 00:30:04.522 1+0 records out 00:30:04.522 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381081 s, 10.7 MB/s 00:30:04.522 23:11:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:30:04.781 23:11:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:30:04.781 23:11:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:30:04.781 
23:11:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:04.781 23:11:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:30:04.781 23:11:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:04.781 23:11:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:04.781 23:11:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:30:04.781 /dev/nbd1 00:30:04.781 23:11:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:05.039 23:11:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:05.039 23:11:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:30:05.039 23:11:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:30:05.039 23:11:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:05.039 23:11:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:05.039 23:11:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:30:05.039 23:11:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:30:05.039 23:11:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:05.039 23:11:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:05.039 23:11:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:30:05.039 1+0 records in 00:30:05.039 1+0 records out 00:30:05.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436846 s, 9.4 MB/s 00:30:05.039 23:11:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:30:05.039 23:11:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:30:05.039 23:11:45 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:30:05.039 23:11:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:05.039 23:11:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:30:05.039 23:11:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:05.039 23:11:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:05.039 23:11:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:05.039 23:11:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:05.039 23:11:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:05.297 23:11:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:30:05.297 { 00:30:05.297 "nbd_device": "/dev/nbd0", 00:30:05.297 "bdev_name": "Malloc0" 00:30:05.297 }, 00:30:05.297 { 00:30:05.297 "nbd_device": "/dev/nbd1", 00:30:05.297 "bdev_name": "Malloc1" 00:30:05.297 } 00:30:05.297 ]' 00:30:05.297 23:11:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:30:05.297 { 00:30:05.297 "nbd_device": "/dev/nbd0", 00:30:05.297 "bdev_name": "Malloc0" 00:30:05.297 }, 00:30:05.297 { 00:30:05.297 "nbd_device": "/dev/nbd1", 00:30:05.297 "bdev_name": "Malloc1" 00:30:05.297 } 00:30:05.297 ]' 00:30:05.297 23:11:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:05.297 23:11:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:30:05.297 /dev/nbd1' 00:30:05.297 23:11:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:05.297 23:11:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:30:05.297 /dev/nbd1' 00:30:05.297 23:11:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:30:05.297 23:11:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:30:05.297 
23:11:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:30:05.297 23:11:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:30:05.297 23:11:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:30:05.298 23:11:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:05.298 23:11:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:05.298 23:11:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:30:05.298 23:11:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:30:05.298 23:11:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:30:05.298 23:11:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:30:05.298 256+0 records in 00:30:05.298 256+0 records out 00:30:05.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00708983 s, 148 MB/s 00:30:05.298 23:11:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:05.298 23:11:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:30:05.298 256+0 records in 00:30:05.298 256+0 records out 00:30:05.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0330271 s, 31.7 MB/s 00:30:05.298 23:11:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:05.298 23:11:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:30:05.298 256+0 records in 00:30:05.298 256+0 records out 00:30:05.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0395987 s, 26.5 MB/s 00:30:05.298 23:11:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:30:05.298 23:11:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:05.298 23:11:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:05.298 23:11:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:30:05.298 23:11:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:30:05.298 23:11:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:30:05.298 23:11:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:30:05.298 23:11:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:05.298 23:11:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:30:05.298 23:11:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:05.298 23:11:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:30:05.298 23:11:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:30:05.298 23:11:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:30:05.298 23:11:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:05.298 23:11:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:05.298 23:11:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:05.298 23:11:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:30:05.298 23:11:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:05.298 23:11:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:05.574 23:11:46 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:05.574 23:11:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:05.574 23:11:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:05.574 23:11:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:05.574 23:11:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:05.574 23:11:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:05.574 23:11:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:30:05.574 23:11:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:30:05.574 23:11:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:05.574 23:11:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:30:05.833 23:11:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:05.833 23:11:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:05.833 23:11:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:05.833 23:11:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:05.833 23:11:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:05.833 23:11:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:05.833 23:11:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:30:05.833 23:11:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:30:05.833 23:11:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:05.833 23:11:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:05.833 23:11:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:06.091 23:11:46 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:30:06.091 23:11:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:06.091 23:11:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:30:06.091 23:11:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:30:06.091 23:11:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:30:06.091 23:11:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:06.091 23:11:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:30:06.091 23:11:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:30:06.091 23:11:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:30:06.091 23:11:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:30:06.091 23:11:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:30:06.091 23:11:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:30:06.091 23:11:46 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:30:06.660 23:11:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:30:08.036 [2024-12-09 23:11:48.406740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:08.036 [2024-12-09 23:11:48.532313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:08.036 [2024-12-09 23:11:48.532327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:08.295 [2024-12-09 23:11:48.742873] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:30:08.295 [2024-12-09 23:11:48.742967] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:30:09.702 spdk_app_start Round 2 00:30:09.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:30:09.702 23:11:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:30:09.702 23:11:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:30:09.702 23:11:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58216 /var/tmp/spdk-nbd.sock 00:30:09.702 23:11:50 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58216 ']' 00:30:09.702 23:11:50 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:30:09.702 23:11:50 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:09.702 23:11:50 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:30:09.702 23:11:50 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:09.702 23:11:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:30:09.975 23:11:50 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:09.975 23:11:50 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:30:09.975 23:11:50 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:30:10.234 Malloc0 00:30:10.234 23:11:50 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:30:10.493 Malloc1 00:30:10.493 23:11:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:30:10.493 23:11:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:10.493 23:11:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:30:10.493 23:11:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:30:10.493 23:11:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:10.493 23:11:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:30:10.493 23:11:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:30:10.493 23:11:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:10.493 23:11:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:30:10.493 23:11:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:10.493 23:11:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:10.493 23:11:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:10.493 23:11:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:30:10.493 23:11:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:10.493 23:11:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:10.493 23:11:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:30:10.751 /dev/nbd0 00:30:10.751 23:11:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:10.751 23:11:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:10.751 23:11:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:30:10.751 23:11:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:30:10.751 23:11:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:10.751 23:11:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:10.751 23:11:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:30:10.751 23:11:51 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:30:10.751 23:11:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:30:10.751 23:11:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:10.751 23:11:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:30:10.751 1+0 records in 00:30:10.751 1+0 records out 00:30:10.751 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283863 s, 14.4 MB/s 00:30:10.751 23:11:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:30:10.751 23:11:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:30:10.751 23:11:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:30:10.751 23:11:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:10.751 23:11:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:30:10.751 23:11:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:10.751 23:11:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:10.751 23:11:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:30:11.011 /dev/nbd1 00:30:11.011 23:11:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:11.011 23:11:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:11.011 23:11:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:30:11.011 23:11:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:30:11.011 23:11:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:11.011 23:11:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:11.011 23:11:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:30:11.011 23:11:51 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:30:11.011 23:11:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:11.011 23:11:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:11.011 23:11:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:30:11.011 1+0 records in 00:30:11.011 1+0 records out 00:30:11.011 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398099 s, 10.3 MB/s 00:30:11.011 23:11:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:30:11.011 23:11:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:30:11.011 23:11:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:30:11.011 23:11:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:11.011 23:11:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:30:11.011 23:11:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:11.011 23:11:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:11.011 23:11:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:11.011 23:11:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:11.011 23:11:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:11.270 23:11:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:30:11.270 { 00:30:11.270 "nbd_device": "/dev/nbd0", 00:30:11.270 "bdev_name": "Malloc0" 00:30:11.270 }, 00:30:11.270 { 00:30:11.270 "nbd_device": "/dev/nbd1", 00:30:11.270 "bdev_name": "Malloc1" 00:30:11.270 } 00:30:11.270 ]' 00:30:11.270 23:11:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:30:11.270 { 
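The xtrace lines above for `/dev/nbd0` and `/dev/nbd1` come from the `waitfornbd` helper in `autotest_common.sh`. A minimal sketch of that logic, reconstructed from the trace (the retry count and sleep interval are assumptions; the real helper also verifies the device size via `stat`): poll `/proc/partitions` until the nbd device appears, then confirm it actually serves reads with a single direct-I/O block read.

```shell
# Sketch of waitfornbd, reconstructed from the xtrace output above.
# Assumption: retry/sleep values are illustrative, not the exact originals.
waitfornbd() {
    local nbd_name=$1
    local retries=${2:-20}
    local i
    # Wait for the kernel to register the device in /proc/partitions.
    for ((i = 1; i <= retries; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # Verify the device serves reads, not just that it is listed:
    # a 4 KiB direct-I/O read mirrors the dd call in the trace.
    for ((i = 1; i <= retries; i++)); do
        if dd if="/dev/$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct 2>/dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}
```

A device that never shows up makes the helper fall through both loops and return nonzero, which is what lets callers bound how long an nbd attach may take.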
00:30:11.270 "nbd_device": "/dev/nbd0", 00:30:11.270 "bdev_name": "Malloc0" 00:30:11.270 }, 00:30:11.270 { 00:30:11.270 "nbd_device": "/dev/nbd1", 00:30:11.270 "bdev_name": "Malloc1" 00:30:11.270 } 00:30:11.270 ]' 00:30:11.270 23:11:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:11.270 23:11:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:30:11.270 /dev/nbd1' 00:30:11.270 23:11:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:11.270 23:11:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:30:11.270 /dev/nbd1' 00:30:11.270 23:11:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:30:11.270 23:11:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:30:11.270 23:11:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:30:11.270 23:11:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:30:11.270 23:11:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:30:11.270 23:11:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:11.270 23:11:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:11.270 23:11:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:30:11.270 23:11:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:30:11.270 23:11:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:30:11.270 23:11:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:30:11.270 256+0 records in 00:30:11.270 256+0 records out 00:30:11.270 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120366 s, 87.1 MB/s 00:30:11.270 23:11:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:11.270 23:11:51 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:30:11.528 256+0 records in 00:30:11.528 256+0 records out 00:30:11.528 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.029748 s, 35.2 MB/s 00:30:11.528 23:11:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:11.528 23:11:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:30:11.528 256+0 records in 00:30:11.528 256+0 records out 00:30:11.528 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0337031 s, 31.1 MB/s 00:30:11.528 23:11:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:30:11.528 23:11:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:11.528 23:11:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:11.528 23:11:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:30:11.528 23:11:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:30:11.528 23:11:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:30:11.528 23:11:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:30:11.528 23:11:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:11.528 23:11:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:30:11.528 23:11:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:11.528 23:11:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:30:11.528 23:11:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
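The write/verify sequence traced above (`dd if=/dev/urandom`, per-device `dd ... oflag=direct`, then `cmp -b -n 1M`) is `nbd_dd_data_verify`. A condensed sketch, under the assumption that the temp-file path is a placeholder; direct I/O is omitted here so the sketch also works against regular files, whereas the real helper targets block devices:

```shell
# Sketch of the nbd_dd_data_verify write path, condensed from the trace.
# Assumption: /tmp path is illustrative; the original uses oflag=direct
# because it writes to /dev/nbd* block devices.
nbd_dd_data_verify() {
    local targets=("$@")
    local tmp_file=/tmp/nbdrandtest
    local dev
    # 256 x 4 KiB = 1 MiB of random reference data.
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${targets[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256
    done
    # Byte-compare the first 1 MiB of each target against the reference.
    for dev in "${targets[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev" || return 1
    done
    rm "$tmp_file"
}
```

Writing the same random buffer to both devices and comparing it back is what catches silent data corruption in the nbd path, not just connectivity.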
00:30:11.528 23:11:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:30:11.528 23:11:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:11.528 23:11:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:11.528 23:11:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:11.528 23:11:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:30:11.528 23:11:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:11.528 23:11:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:11.787 23:11:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:11.787 23:11:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:11.787 23:11:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:11.787 23:11:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:11.787 23:11:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:11.787 23:11:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:11.787 23:11:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:30:11.787 23:11:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:30:11.787 23:11:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:11.787 23:11:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:30:12.044 23:11:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:12.044 23:11:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:12.044 23:11:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:12.044 23:11:52 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:12.044 23:11:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:12.044 23:11:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:12.044 23:11:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:30:12.044 23:11:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:30:12.044 23:11:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:12.044 23:11:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:12.044 23:11:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:12.303 23:11:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:30:12.303 23:11:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:12.303 23:11:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:30:12.303 23:11:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:30:12.303 23:11:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:30:12.303 23:11:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:12.303 23:11:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:30:12.303 23:11:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:30:12.303 23:11:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:30:12.303 23:11:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:30:12.303 23:11:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:30:12.303 23:11:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:30:12.303 23:11:52 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:30:12.871 23:11:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:30:13.820 
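The `nbd_get_count` trace above shows how the test counts attached devices: `nbd_get_disks` returns a JSON array, `jq -r '.[] | .nbd_device'` extracts the device paths, and `grep -c /dev/nbd` counts them. The trailing `true` in the trace exists because `grep -c` exits nonzero when the count is 0, as it is here after both disks were stopped. A sketch of that pipeline (the function name is hypothetical):

```shell
# Sketch of the disk-counting pipeline from the trace above.
# Assumption: count_nbd_disks is an illustrative name; the real helper
# calls rpc.py nbd_get_disks to obtain the JSON.
count_nbd_disks() {
    local disks_json=$1
    local names
    names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    # grep -c prints 0 but exits 1 on no match; '|| true' keeps the
    # pipeline from aborting under 'set -e', mirroring the traced 'true'.
    echo "$names" | grep -c /dev/nbd || true
}
```

This is why the log shows `count=0` followed by `'[' 0 -ne 0 ']'` returning cleanly: the empty array produced zero matches, confirming both nbd disks were detached.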
[2024-12-09 23:11:54.444864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:14.079 [2024-12-09 23:11:54.563013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:14.079 [2024-12-09 23:11:54.563019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:14.337 [2024-12-09 23:11:54.763577] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:30:14.337 [2024-12-09 23:11:54.763690] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:30:15.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:30:15.714 23:11:56 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58216 /var/tmp/spdk-nbd.sock 00:30:15.714 23:11:56 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58216 ']' 00:30:15.714 23:11:56 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:30:15.714 23:11:56 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:15.714 23:11:56 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:30:15.714 23:11:56 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:15.714 23:11:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:30:15.972 23:11:56 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:15.972 23:11:56 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:30:15.972 23:11:56 event.app_repeat -- event/event.sh@39 -- # killprocess 58216 00:30:15.972 23:11:56 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58216 ']' 00:30:15.972 23:11:56 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58216 00:30:15.972 23:11:56 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:30:15.972 23:11:56 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:15.972 23:11:56 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58216 00:30:16.259 killing process with pid 58216 00:30:16.259 23:11:56 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:16.259 23:11:56 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:16.259 23:11:56 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58216' 00:30:16.259 23:11:56 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58216 00:30:16.259 23:11:56 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58216 00:30:17.194 spdk_app_start is called in Round 0. 00:30:17.194 Shutdown signal received, stop current app iteration 00:30:17.194 Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 reinitialization... 00:30:17.194 spdk_app_start is called in Round 1. 00:30:17.194 Shutdown signal received, stop current app iteration 00:30:17.194 Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 reinitialization... 00:30:17.194 spdk_app_start is called in Round 2. 
00:30:17.194 Shutdown signal received, stop current app iteration 00:30:17.194 Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 reinitialization... 00:30:17.194 spdk_app_start is called in Round 3. 00:30:17.194 Shutdown signal received, stop current app iteration 00:30:17.194 23:11:57 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:30:17.194 23:11:57 event.app_repeat -- event/event.sh@42 -- # return 0 00:30:17.194 00:30:17.194 real 0m20.777s 00:30:17.194 user 0m44.624s 00:30:17.194 sys 0m3.470s 00:30:17.194 ************************************ 00:30:17.194 END TEST app_repeat 00:30:17.194 ************************************ 00:30:17.194 23:11:57 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:17.194 23:11:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:30:17.194 23:11:57 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:30:17.194 23:11:57 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:30:17.194 23:11:57 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:17.194 23:11:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:17.194 23:11:57 event -- common/autotest_common.sh@10 -- # set +x 00:30:17.453 ************************************ 00:30:17.453 START TEST cpu_locks 00:30:17.453 ************************************ 00:30:17.453 23:11:57 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:30:17.453 * Looking for test storage... 
00:30:17.453 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:30:17.453 23:11:57 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:17.453 23:11:57 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:30:17.453 23:11:57 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:17.453 23:11:58 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:17.453 23:11:58 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:17.453 23:11:58 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:17.453 23:11:58 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:17.453 23:11:58 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:30:17.453 23:11:58 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:30:17.453 23:11:58 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:30:17.453 23:11:58 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:30:17.453 23:11:58 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:30:17.453 23:11:58 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:30:17.453 23:11:58 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:30:17.453 23:11:58 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:17.453 23:11:58 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:30:17.453 23:11:58 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:30:17.453 23:11:58 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:17.453 23:11:58 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:17.453 23:11:58 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:30:17.453 23:11:58 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:30:17.453 23:11:58 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:17.453 23:11:58 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:30:17.453 23:11:58 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:30:17.453 23:11:58 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:30:17.453 23:11:58 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:30:17.453 23:11:58 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:17.453 23:11:58 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:30:17.453 23:11:58 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:30:17.453 23:11:58 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:17.453 23:11:58 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:17.453 23:11:58 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:30:17.454 23:11:58 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:17.454 23:11:58 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:17.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.454 --rc genhtml_branch_coverage=1 00:30:17.454 --rc genhtml_function_coverage=1 00:30:17.454 --rc genhtml_legend=1 00:30:17.454 --rc geninfo_all_blocks=1 00:30:17.454 --rc geninfo_unexecuted_blocks=1 00:30:17.454 00:30:17.454 ' 00:30:17.454 23:11:58 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:17.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.454 --rc genhtml_branch_coverage=1 00:30:17.454 --rc genhtml_function_coverage=1 00:30:17.454 --rc genhtml_legend=1 00:30:17.454 --rc geninfo_all_blocks=1 00:30:17.454 --rc geninfo_unexecuted_blocks=1 
00:30:17.454 00:30:17.454 ' 00:30:17.454 23:11:58 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:17.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.454 --rc genhtml_branch_coverage=1 00:30:17.454 --rc genhtml_function_coverage=1 00:30:17.454 --rc genhtml_legend=1 00:30:17.454 --rc geninfo_all_blocks=1 00:30:17.454 --rc geninfo_unexecuted_blocks=1 00:30:17.454 00:30:17.454 ' 00:30:17.454 23:11:58 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:17.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.454 --rc genhtml_branch_coverage=1 00:30:17.454 --rc genhtml_function_coverage=1 00:30:17.454 --rc genhtml_legend=1 00:30:17.454 --rc geninfo_all_blocks=1 00:30:17.454 --rc geninfo_unexecuted_blocks=1 00:30:17.454 00:30:17.454 ' 00:30:17.454 23:11:58 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:30:17.454 23:11:58 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:30:17.454 23:11:58 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:30:17.454 23:11:58 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:30:17.454 23:11:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:17.454 23:11:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:17.454 23:11:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:30:17.454 ************************************ 00:30:17.454 START TEST default_locks 00:30:17.454 ************************************ 00:30:17.454 23:11:58 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:30:17.454 23:11:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58674 00:30:17.454 23:11:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:30:17.454 
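The `lt 1.15 2` / `cmp_versions` trace in the `cpu_locks` prologue above checks whether the installed lcov predates 2.0. The traced algorithm splits both versions on `.`, `-` and `:` (`IFS=.-:` with `read -ra`), then compares component-wise, treating missing components as 0. A condensed sketch of that comparison:

```shell
# Sketch of the 'lt' version check from scripts/common.sh, condensed
# from the trace above. Assumption: only the numeric comparison path is
# shown; the original routes through a generic cmp_versions with an op.
lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v
    local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < max; v++)); do
        # Missing components default to 0, so 1.15 vs 2 compares 1<2.
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1  # equal versions are not strictly less-than
}
```

Component-wise numeric comparison is what makes `1.15 < 2` true even though a plain string comparison would order them the other way.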
23:11:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58674 00:30:17.454 23:11:58 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58674 ']' 00:30:17.454 23:11:58 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:17.454 23:11:58 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:17.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:17.454 23:11:58 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:17.454 23:11:58 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:17.454 23:11:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:30:17.712 [2024-12-09 23:11:58.197940] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:30:17.712 [2024-12-09 23:11:58.198178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58674 ] 00:30:17.969 [2024-12-09 23:11:58.397096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.969 [2024-12-09 23:11:58.524623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:18.908 23:11:59 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:18.908 23:11:59 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:30:18.908 23:11:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58674 00:30:18.908 23:11:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58674 00:30:18.908 23:11:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:30:19.474 23:11:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58674 00:30:19.474 23:11:59 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58674 ']' 00:30:19.474 23:11:59 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58674 00:30:19.474 23:11:59 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:30:19.474 23:11:59 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:19.474 23:11:59 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58674 00:30:19.474 23:11:59 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:19.474 23:12:00 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:19.474 killing process with pid 58674 00:30:19.474 23:12:00 event.cpu_locks.default_locks -- 
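The `locks_exist` trace above (`lslocks -p 58674 | grep -q spdk_cpu_lock`) is how `default_locks` confirms the target holds its CPU-core file locks. A sketch of that check, with the caveat that the lock-file naming is taken from the grep pattern in the trace, not from SPDK source:

```shell
# Sketch of locks_exist from the trace above. Assumption: SPDK claims
# each core by flocking a file whose name contains 'spdk_cpu_lock';
# lslocks -p lists locks held by the pid and grep confirms at least one.
locks_exist() {
    local pid=$1
    lslocks -p "$pid" 2>/dev/null | grep -q spdk_cpu_lock
}
```

Because the check inspects the live lock table rather than the filesystem, it returns false as soon as the process dies, which is what the later `NOT waitforlisten` negative test relies on.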
common/autotest_common.sh@972 -- # echo 'killing process with pid 58674' 00:30:19.474 23:12:00 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58674 00:30:19.474 23:12:00 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58674 00:30:22.006 23:12:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58674 00:30:22.006 23:12:02 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:30:22.006 23:12:02 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58674 00:30:22.006 23:12:02 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:30:22.006 23:12:02 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:22.006 23:12:02 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:30:22.006 23:12:02 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:22.006 23:12:02 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58674 00:30:22.006 23:12:02 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58674 ']' 00:30:22.006 23:12:02 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:22.006 23:12:02 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:22.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:22.006 23:12:02 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:22.006 23:12:02 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:22.006 23:12:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:30:22.006 ERROR: process (pid: 58674) is no longer running 00:30:22.006 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58674) - No such process 00:30:22.006 23:12:02 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:22.006 23:12:02 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:30:22.006 23:12:02 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:30:22.006 23:12:02 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:22.006 23:12:02 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:22.006 23:12:02 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:22.006 23:12:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:30:22.006 23:12:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:30:22.006 23:12:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:30:22.006 23:12:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:30:22.006 00:30:22.006 real 0m4.403s 00:30:22.006 user 0m4.374s 00:30:22.006 sys 0m0.740s 00:30:22.006 23:12:02 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:22.006 23:12:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:30:22.006 ************************************ 00:30:22.006 END TEST default_locks 00:30:22.006 ************************************ 00:30:22.006 23:12:02 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:30:22.006 23:12:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:30:22.006 23:12:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:22.006 23:12:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:30:22.006 ************************************ 00:30:22.006 START TEST default_locks_via_rpc 00:30:22.006 ************************************ 00:30:22.006 23:12:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:30:22.006 23:12:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58757 00:30:22.006 23:12:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58757 00:30:22.006 23:12:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:30:22.006 23:12:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58757 ']' 00:30:22.006 23:12:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:22.006 23:12:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:22.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:22.006 23:12:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:22.006 23:12:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:22.006 23:12:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:22.265 [2024-12-09 23:12:02.663373] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:30:22.265 [2024-12-09 23:12:02.663513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58757 ] 00:30:22.265 [2024-12-09 23:12:02.849773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:22.524 [2024-12-09 23:12:02.976339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:23.460 23:12:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:23.460 23:12:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:30:23.460 23:12:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:30:23.460 23:12:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.460 23:12:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:23.460 23:12:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.460 23:12:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:30:23.460 23:12:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:30:23.460 23:12:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:30:23.460 23:12:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:30:23.460 23:12:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:30:23.460 23:12:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.460 23:12:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:23.460 23:12:03 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.460 23:12:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58757 00:30:23.460 23:12:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58757 00:30:23.460 23:12:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:30:24.027 23:12:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58757 00:30:24.027 23:12:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58757 ']' 00:30:24.027 23:12:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58757 00:30:24.027 23:12:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:30:24.027 23:12:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:24.027 23:12:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58757 00:30:24.027 23:12:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:24.027 killing process with pid 58757 00:30:24.027 23:12:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:24.027 23:12:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58757' 00:30:24.027 23:12:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58757 00:30:24.027 23:12:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58757 00:30:26.560 00:30:26.560 real 0m4.521s 00:30:26.560 user 0m4.507s 00:30:26.560 sys 0m0.754s 00:30:26.560 23:12:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:26.560 23:12:07 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:26.560 ************************************ 00:30:26.560 END TEST default_locks_via_rpc 00:30:26.560 ************************************ 00:30:26.560 23:12:07 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:30:26.560 23:12:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:26.560 23:12:07 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:26.560 23:12:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:30:26.560 ************************************ 00:30:26.560 START TEST non_locking_app_on_locked_coremask 00:30:26.560 ************************************ 00:30:26.560 23:12:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:30:26.560 23:12:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58836 00:30:26.560 23:12:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:30:26.560 23:12:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58836 /var/tmp/spdk.sock 00:30:26.560 23:12:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58836 ']' 00:30:26.560 23:12:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:26.560 23:12:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:26.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:26.560 23:12:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:26.560 23:12:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:26.560 23:12:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:30:26.820 [2024-12-09 23:12:07.249571] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:30:26.820 [2024-12-09 23:12:07.249696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58836 ] 00:30:26.820 [2024-12-09 23:12:07.433271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:27.081 [2024-12-09 23:12:07.558763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:28.017 23:12:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:28.017 23:12:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:30:28.017 23:12:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58852 00:30:28.017 23:12:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58852 /var/tmp/spdk2.sock 00:30:28.017 23:12:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:30:28.017 23:12:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58852 ']' 00:30:28.017 23:12:08 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:30:28.017 23:12:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:28.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:30:28.017 23:12:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:30:28.017 23:12:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:28.017 23:12:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:30:28.017 [2024-12-09 23:12:08.586914] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:30:28.017 [2024-12-09 23:12:08.587037] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58852 ] 00:30:28.274 [2024-12-09 23:12:08.772715] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:30:28.274 [2024-12-09 23:12:08.772772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.533 [2024-12-09 23:12:09.026128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:31.062 23:12:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:31.062 23:12:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:30:31.062 23:12:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58836 00:30:31.062 23:12:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58836 00:30:31.062 23:12:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:30:31.641 23:12:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58836 00:30:31.641 23:12:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58836 ']' 00:30:31.641 23:12:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58836 00:30:31.641 23:12:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:30:31.641 23:12:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:31.641 23:12:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58836 00:30:31.641 23:12:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:31.641 23:12:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:31.641 killing process with pid 58836 00:30:31.641 23:12:12 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58836' 00:30:31.641 23:12:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58836 00:30:31.641 23:12:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58836 00:30:36.918 23:12:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58852 00:30:36.918 23:12:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58852 ']' 00:30:36.918 23:12:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58852 00:30:36.918 23:12:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:30:36.918 23:12:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:36.918 23:12:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58852 00:30:36.918 23:12:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:36.918 killing process with pid 58852 00:30:36.918 23:12:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:36.918 23:12:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58852' 00:30:36.918 23:12:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58852 00:30:36.918 23:12:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58852 00:30:39.455 00:30:39.455 real 0m12.537s 00:30:39.455 user 0m12.891s 00:30:39.455 sys 0m1.508s 00:30:39.455 23:12:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:30:39.455 23:12:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:30:39.455 ************************************ 00:30:39.455 END TEST non_locking_app_on_locked_coremask 00:30:39.455 ************************************ 00:30:39.455 23:12:19 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:30:39.455 23:12:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:39.455 23:12:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:39.455 23:12:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:30:39.455 ************************************ 00:30:39.455 START TEST locking_app_on_unlocked_coremask 00:30:39.455 ************************************ 00:30:39.455 23:12:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:30:39.455 23:12:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59014 00:30:39.455 23:12:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59014 /var/tmp/spdk.sock 00:30:39.455 23:12:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59014 ']' 00:30:39.455 23:12:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:39.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:39.455 23:12:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:39.455 23:12:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:39.455 23:12:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:39.455 23:12:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:30:39.455 23:12:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:30:39.456 [2024-12-09 23:12:19.855292] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:30:39.456 [2024-12-09 23:12:19.855451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59014 ] 00:30:39.456 [2024-12-09 23:12:20.040600] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:30:39.456 [2024-12-09 23:12:20.040667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:39.715 [2024-12-09 23:12:20.164973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:40.674 23:12:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:40.674 23:12:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:30:40.674 23:12:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:30:40.674 23:12:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59036 00:30:40.674 23:12:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59036 /var/tmp/spdk2.sock 00:30:40.674 23:12:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59036 ']' 00:30:40.674 23:12:21 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:30:40.674 23:12:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:40.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:30:40.674 23:12:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:30:40.674 23:12:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:40.674 23:12:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:30:40.674 [2024-12-09 23:12:21.185762] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:30:40.674 [2024-12-09 23:12:21.185896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59036 ] 00:30:40.938 [2024-12-09 23:12:21.372611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:41.196 [2024-12-09 23:12:21.629420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:43.729 23:12:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:43.729 23:12:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:30:43.729 23:12:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59036 00:30:43.729 23:12:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59036 00:30:43.729 23:12:23 event.cpu_locks.locking_app_on_unlocked_coremask -- 
event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:30:44.297 23:12:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59014 00:30:44.297 23:12:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59014 ']' 00:30:44.297 23:12:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59014 00:30:44.297 23:12:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:30:44.297 23:12:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:44.297 23:12:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59014 00:30:44.297 23:12:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:44.297 23:12:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:44.297 killing process with pid 59014 00:30:44.297 23:12:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59014' 00:30:44.297 23:12:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59014 00:30:44.297 23:12:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59014 00:30:49.585 23:12:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59036 00:30:49.585 23:12:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59036 ']' 00:30:49.585 23:12:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59036 00:30:49.585 23:12:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:30:49.585 
23:12:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:49.585 23:12:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59036 00:30:49.585 23:12:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:49.585 23:12:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:49.585 killing process with pid 59036 00:30:49.585 23:12:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59036' 00:30:49.585 23:12:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59036 00:30:49.585 23:12:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59036 00:30:51.529 00:30:51.529 real 0m12.238s 00:30:51.529 user 0m12.644s 00:30:51.529 sys 0m1.417s 00:30:51.529 23:12:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:51.529 ************************************ 00:30:51.529 END TEST locking_app_on_unlocked_coremask 00:30:51.529 ************************************ 00:30:51.529 23:12:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:30:51.529 23:12:32 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:30:51.529 23:12:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:51.529 23:12:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:51.529 23:12:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:30:51.530 ************************************ 00:30:51.530 START TEST locking_app_on_locked_coremask 00:30:51.530 
************************************ 00:30:51.530 23:12:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:30:51.530 23:12:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:30:51.530 23:12:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59184 00:30:51.530 23:12:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59184 /var/tmp/spdk.sock 00:30:51.530 23:12:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59184 ']' 00:30:51.530 23:12:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:51.530 23:12:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:51.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:51.530 23:12:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:51.530 23:12:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:51.530 23:12:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:30:51.789 [2024-12-09 23:12:32.177018] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:30:51.789 [2024-12-09 23:12:32.177906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59184 ] 00:30:51.789 [2024-12-09 23:12:32.359941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.052 [2024-12-09 23:12:32.481277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:52.992 23:12:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:52.992 23:12:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:30:52.992 23:12:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59207 00:30:52.992 23:12:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:30:52.992 23:12:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59207 /var/tmp/spdk2.sock 00:30:52.992 23:12:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:30:52.992 23:12:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59207 /var/tmp/spdk2.sock 00:30:52.992 23:12:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:30:52.992 23:12:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:52.992 23:12:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:30:52.992 23:12:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:30:52.992 23:12:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59207 /var/tmp/spdk2.sock 00:30:52.992 23:12:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59207 ']' 00:30:52.992 23:12:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:30:52.992 23:12:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:52.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:30:52.992 23:12:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:30:52.992 23:12:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:52.992 23:12:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:30:52.992 [2024-12-09 23:12:33.501171] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:30:52.992 [2024-12-09 23:12:33.501297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59207 ] 00:30:53.251 [2024-12-09 23:12:33.690245] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59184 has claimed it. 00:30:53.251 [2024-12-09 23:12:33.690334] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:30:53.819 ERROR: process (pid: 59207) is no longer running 00:30:53.819 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59207) - No such process 00:30:53.819 23:12:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:53.819 23:12:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:30:53.819 23:12:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:30:53.819 23:12:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:53.819 23:12:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:53.819 23:12:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:53.819 23:12:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59184 00:30:53.819 23:12:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:30:53.819 23:12:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59184 00:30:54.077 23:12:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59184 00:30:54.077 23:12:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59184 ']' 00:30:54.077 23:12:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59184 00:30:54.077 23:12:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:30:54.077 23:12:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:54.077 23:12:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59184 00:30:54.077 
23:12:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:54.077 23:12:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:54.077 killing process with pid 59184 00:30:54.077 23:12:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59184' 00:30:54.077 23:12:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59184 00:30:54.077 23:12:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59184 00:30:56.618 00:30:56.618 real 0m5.056s 00:30:56.618 user 0m5.251s 00:30:56.618 sys 0m0.875s 00:30:56.618 23:12:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:56.618 23:12:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:30:56.618 ************************************ 00:30:56.618 END TEST locking_app_on_locked_coremask 00:30:56.618 ************************************ 00:30:56.618 23:12:37 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:30:56.618 23:12:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:56.618 23:12:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:56.618 23:12:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:30:56.618 ************************************ 00:30:56.618 START TEST locking_overlapped_coremask 00:30:56.618 ************************************ 00:30:56.619 23:12:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:30:56.619 23:12:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59275 00:30:56.619 23:12:37 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:30:56.619 23:12:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59275 /var/tmp/spdk.sock 00:30:56.619 23:12:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59275 ']' 00:30:56.619 23:12:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:56.619 23:12:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:56.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:56.619 23:12:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:56.619 23:12:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:56.619 23:12:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:30:56.877 [2024-12-09 23:12:37.293738] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:30:56.877 [2024-12-09 23:12:37.293889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59275 ] 00:30:56.877 [2024-12-09 23:12:37.480926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:57.135 [2024-12-09 23:12:37.611648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:57.135 [2024-12-09 23:12:37.611701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:57.135 [2024-12-09 23:12:37.611734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:58.070 23:12:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:58.070 23:12:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:30:58.070 23:12:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59299 00:30:58.070 23:12:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59299 /var/tmp/spdk2.sock 00:30:58.070 23:12:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:30:58.070 23:12:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59299 /var/tmp/spdk2.sock 00:30:58.070 23:12:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:30:58.070 23:12:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:30:58.070 23:12:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:58.070 23:12:38 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:30:58.070 23:12:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:58.070 23:12:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59299 /var/tmp/spdk2.sock 00:30:58.070 23:12:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59299 ']' 00:30:58.070 23:12:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:30:58.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:30:58.070 23:12:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:58.070 23:12:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:30:58.070 23:12:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:58.070 23:12:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:30:58.070 [2024-12-09 23:12:38.687002] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:30:58.070 [2024-12-09 23:12:38.687178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59299 ] 00:30:58.327 [2024-12-09 23:12:38.890446] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59275 has claimed it. 00:30:58.327 [2024-12-09 23:12:38.890572] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:30:58.892 ERROR: process (pid: 59299) is no longer running 00:30:58.892 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59299) - No such process 00:30:58.892 23:12:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:58.892 23:12:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:30:58.892 23:12:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:30:58.892 23:12:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:58.892 23:12:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:58.892 23:12:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:58.892 23:12:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:30:58.892 23:12:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:30:58.892 23:12:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:30:58.892 23:12:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:30:58.892 23:12:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59275 00:30:58.892 23:12:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59275 ']' 00:30:58.892 23:12:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59275 00:30:58.892 23:12:39 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:30:58.892 23:12:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:58.892 23:12:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59275 00:30:58.892 23:12:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:58.892 killing process with pid 59275 00:30:58.892 23:12:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:58.892 23:12:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59275' 00:30:58.892 23:12:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59275 00:30:58.892 23:12:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59275 00:31:01.421 ************************************ 00:31:01.421 END TEST locking_overlapped_coremask 00:31:01.421 ************************************ 00:31:01.421 00:31:01.421 real 0m4.719s 00:31:01.421 user 0m12.817s 00:31:01.421 sys 0m0.664s 00:31:01.421 23:12:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:01.421 23:12:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:31:01.421 23:12:41 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:31:01.421 23:12:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:01.421 23:12:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:01.421 23:12:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:31:01.421 ************************************ 00:31:01.421 START TEST 
locking_overlapped_coremask_via_rpc 00:31:01.421 ************************************ 00:31:01.421 23:12:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:31:01.421 23:12:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59368 00:31:01.421 23:12:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59368 /var/tmp/spdk.sock 00:31:01.421 23:12:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:31:01.421 23:12:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59368 ']' 00:31:01.421 23:12:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:01.421 23:12:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:01.421 23:12:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:01.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:01.421 23:12:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:01.421 23:12:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:01.680 [2024-12-09 23:12:42.069859] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:31:01.680 [2024-12-09 23:12:42.070004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59368 ] 00:31:01.680 [2024-12-09 23:12:42.245115] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:31:01.680 [2024-12-09 23:12:42.245176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:01.939 [2024-12-09 23:12:42.372956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:01.939 [2024-12-09 23:12:42.373059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:01.939 [2024-12-09 23:12:42.373085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:02.877 23:12:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:02.877 23:12:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:31:02.877 23:12:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59386 00:31:02.877 23:12:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59386 /var/tmp/spdk2.sock 00:31:02.877 23:12:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59386 ']' 00:31:02.877 23:12:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:31:02.877 23:12:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:02.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:31:02.877 23:12:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:31:02.877 23:12:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:02.877 23:12:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:02.877 23:12:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:31:02.877 [2024-12-09 23:12:43.406381] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:31:02.877 [2024-12-09 23:12:43.406528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59386 ] 00:31:03.136 [2024-12-09 23:12:43.594094] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:31:03.136 [2024-12-09 23:12:43.594146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:03.393 [2024-12-09 23:12:43.848221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:03.393 [2024-12-09 23:12:43.848265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:03.393 [2024-12-09 23:12:43.848308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:05.922 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:05.922 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:31:05.922 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:31:05.922 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.922 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:05.922 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.922 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:31:05.922 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:31:05.922 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:31:05.922 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:05.922 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:05.922 23:12:46 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:05.922 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:05.922 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:31:05.922 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.922 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:05.922 [2024-12-09 23:12:46.028641] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59368 has claimed it. 00:31:05.922 request: 00:31:05.922 { 00:31:05.922 "method": "framework_enable_cpumask_locks", 00:31:05.922 "req_id": 1 00:31:05.922 } 00:31:05.922 Got JSON-RPC error response 00:31:05.922 response: 00:31:05.922 { 00:31:05.922 "code": -32603, 00:31:05.923 "message": "Failed to claim CPU core: 2" 00:31:05.923 } 00:31:05.923 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:05.923 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:31:05.923 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:05.923 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:05.923 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:05.923 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59368 /var/tmp/spdk.sock 00:31:05.923 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59368 ']' 00:31:05.923 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:05.923 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:05.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:05.923 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:05.923 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:05.923 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:05.923 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:05.923 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:31:05.923 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59386 /var/tmp/spdk2.sock 00:31:05.923 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59386 ']' 00:31:05.923 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:31:05.923 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:05.923 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:31:05.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:31:05.923 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:05.923 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:05.923 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:05.923 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:31:05.923 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:31:05.923 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:31:05.923 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:31:05.923 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:31:05.923 00:31:05.923 real 0m4.586s 00:31:05.923 user 0m1.413s 00:31:05.923 sys 0m0.227s 00:31:05.923 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:05.923 23:12:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:05.923 ************************************ 00:31:05.923 END TEST locking_overlapped_coremask_via_rpc 00:31:05.923 ************************************ 00:31:06.181 23:12:46 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:31:06.181 23:12:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59368 ]] 00:31:06.181 23:12:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59368 00:31:06.181 23:12:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59368 ']' 00:31:06.181 23:12:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59368 00:31:06.181 23:12:46 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:31:06.181 23:12:46 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:06.181 23:12:46 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59368 00:31:06.181 23:12:46 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:06.181 killing process with pid 59368 00:31:06.181 23:12:46 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:06.181 23:12:46 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59368' 00:31:06.181 23:12:46 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59368 00:31:06.181 23:12:46 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59368 00:31:08.743 23:12:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59386 ]] 00:31:08.743 23:12:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59386 00:31:08.743 23:12:49 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59386 ']' 00:31:08.743 23:12:49 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59386 00:31:08.743 23:12:49 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:31:08.743 23:12:49 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:08.743 23:12:49 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59386 00:31:08.743 23:12:49 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:31:08.743 23:12:49 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:31:08.743 killing process with pid 59386 00:31:08.743 23:12:49 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59386' 00:31:08.743 23:12:49 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59386 00:31:08.743 23:12:49 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59386 00:31:11.273 23:12:51 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:31:11.273 23:12:51 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:31:11.273 23:12:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59368 ]] 00:31:11.273 23:12:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59368 00:31:11.273 23:12:51 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59368 ']' 00:31:11.273 Process with pid 59368 is not found 00:31:11.273 Process with pid 59386 is not found 00:31:11.273 23:12:51 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59368 00:31:11.273 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59368) - No such process 00:31:11.273 23:12:51 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59368 is not found' 00:31:11.273 23:12:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59386 ]] 00:31:11.273 23:12:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59386 00:31:11.273 23:12:51 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59386 ']' 00:31:11.273 23:12:51 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59386 00:31:11.273 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59386) - No such process 00:31:11.273 23:12:51 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59386 is not found' 00:31:11.273 23:12:51 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:31:11.273 00:31:11.273 real 0m54.044s 00:31:11.273 user 1m31.982s 00:31:11.273 sys 0m7.516s 00:31:11.273 23:12:51 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:11.273 23:12:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:31:11.273 
************************************ 00:31:11.273 END TEST cpu_locks 00:31:11.273 ************************************ 00:31:11.531 00:31:11.531 real 1m25.693s 00:31:11.531 user 2m32.954s 00:31:11.531 sys 0m12.299s 00:31:11.531 23:12:51 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:11.531 23:12:51 event -- common/autotest_common.sh@10 -- # set +x 00:31:11.531 ************************************ 00:31:11.531 END TEST event 00:31:11.531 ************************************ 00:31:11.531 23:12:51 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:31:11.531 23:12:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:11.531 23:12:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:11.531 23:12:51 -- common/autotest_common.sh@10 -- # set +x 00:31:11.531 ************************************ 00:31:11.531 START TEST thread 00:31:11.531 ************************************ 00:31:11.531 23:12:51 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:31:11.531 * Looking for test storage... 
00:31:11.531 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:31:11.531 23:12:52 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:11.531 23:12:52 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:31:11.531 23:12:52 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:11.788 23:12:52 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:11.788 23:12:52 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:11.788 23:12:52 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:11.788 23:12:52 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:11.788 23:12:52 thread -- scripts/common.sh@336 -- # IFS=.-: 00:31:11.788 23:12:52 thread -- scripts/common.sh@336 -- # read -ra ver1 00:31:11.788 23:12:52 thread -- scripts/common.sh@337 -- # IFS=.-: 00:31:11.788 23:12:52 thread -- scripts/common.sh@337 -- # read -ra ver2 00:31:11.788 23:12:52 thread -- scripts/common.sh@338 -- # local 'op=<' 00:31:11.788 23:12:52 thread -- scripts/common.sh@340 -- # ver1_l=2 00:31:11.788 23:12:52 thread -- scripts/common.sh@341 -- # ver2_l=1 00:31:11.788 23:12:52 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:11.788 23:12:52 thread -- scripts/common.sh@344 -- # case "$op" in 00:31:11.788 23:12:52 thread -- scripts/common.sh@345 -- # : 1 00:31:11.788 23:12:52 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:11.788 23:12:52 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:11.788 23:12:52 thread -- scripts/common.sh@365 -- # decimal 1 00:31:11.788 23:12:52 thread -- scripts/common.sh@353 -- # local d=1 00:31:11.788 23:12:52 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:11.788 23:12:52 thread -- scripts/common.sh@355 -- # echo 1 00:31:11.788 23:12:52 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:31:11.788 23:12:52 thread -- scripts/common.sh@366 -- # decimal 2 00:31:11.788 23:12:52 thread -- scripts/common.sh@353 -- # local d=2 00:31:11.788 23:12:52 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:11.788 23:12:52 thread -- scripts/common.sh@355 -- # echo 2 00:31:11.788 23:12:52 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:31:11.788 23:12:52 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:11.788 23:12:52 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:11.788 23:12:52 thread -- scripts/common.sh@368 -- # return 0 00:31:11.788 23:12:52 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:11.788 23:12:52 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:11.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.788 --rc genhtml_branch_coverage=1 00:31:11.788 --rc genhtml_function_coverage=1 00:31:11.788 --rc genhtml_legend=1 00:31:11.788 --rc geninfo_all_blocks=1 00:31:11.788 --rc geninfo_unexecuted_blocks=1 00:31:11.788 00:31:11.788 ' 00:31:11.788 23:12:52 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:11.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.788 --rc genhtml_branch_coverage=1 00:31:11.788 --rc genhtml_function_coverage=1 00:31:11.788 --rc genhtml_legend=1 00:31:11.788 --rc geninfo_all_blocks=1 00:31:11.788 --rc geninfo_unexecuted_blocks=1 00:31:11.788 00:31:11.788 ' 00:31:11.788 23:12:52 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:11.788 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.788 --rc genhtml_branch_coverage=1 00:31:11.788 --rc genhtml_function_coverage=1 00:31:11.788 --rc genhtml_legend=1 00:31:11.788 --rc geninfo_all_blocks=1 00:31:11.788 --rc geninfo_unexecuted_blocks=1 00:31:11.788 00:31:11.788 ' 00:31:11.788 23:12:52 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:11.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.788 --rc genhtml_branch_coverage=1 00:31:11.788 --rc genhtml_function_coverage=1 00:31:11.788 --rc genhtml_legend=1 00:31:11.788 --rc geninfo_all_blocks=1 00:31:11.788 --rc geninfo_unexecuted_blocks=1 00:31:11.788 00:31:11.788 ' 00:31:11.788 23:12:52 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:31:11.788 23:12:52 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:31:11.788 23:12:52 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:11.788 23:12:52 thread -- common/autotest_common.sh@10 -- # set +x 00:31:11.788 ************************************ 00:31:11.788 START TEST thread_poller_perf 00:31:11.788 ************************************ 00:31:11.788 23:12:52 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:31:11.788 [2024-12-09 23:12:52.251725] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:31:11.788 [2024-12-09 23:12:52.252896] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59589 ] 00:31:12.045 [2024-12-09 23:12:52.453888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:12.045 [2024-12-09 23:12:52.581446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:12.045 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:31:13.421 [2024-12-09T23:12:54.057Z] ====================================== 00:31:13.421 [2024-12-09T23:12:54.057Z] busy:2504288502 (cyc) 00:31:13.421 [2024-12-09T23:12:54.057Z] total_run_count: 355000 00:31:13.421 [2024-12-09T23:12:54.057Z] tsc_hz: 2490000000 (cyc) 00:31:13.421 [2024-12-09T23:12:54.057Z] ====================================== 00:31:13.421 [2024-12-09T23:12:54.057Z] poller_cost: 7054 (cyc), 2832 (nsec) 00:31:13.421 00:31:13.421 real 0m1.629s 00:31:13.421 user 0m1.416s 00:31:13.421 sys 0m0.103s 00:31:13.421 23:12:53 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:13.421 23:12:53 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:31:13.421 ************************************ 00:31:13.421 END TEST thread_poller_perf 00:31:13.421 ************************************ 00:31:13.421 23:12:53 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:31:13.421 23:12:53 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:31:13.421 23:12:53 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:13.421 23:12:53 thread -- common/autotest_common.sh@10 -- # set +x 00:31:13.421 ************************************ 00:31:13.421 START TEST thread_poller_perf 00:31:13.421 
************************************ 00:31:13.421 23:12:53 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:31:13.421 [2024-12-09 23:12:53.950996] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:31:13.421 [2024-12-09 23:12:53.951177] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59631 ] 00:31:13.680 [2024-12-09 23:12:54.147798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:13.680 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:31:13.680 [2024-12-09 23:12:54.288795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:15.123 [2024-12-09T23:12:55.759Z] ====================================== 00:31:15.123 [2024-12-09T23:12:55.759Z] busy:2494476094 (cyc) 00:31:15.123 [2024-12-09T23:12:55.759Z] total_run_count: 4297000 00:31:15.123 [2024-12-09T23:12:55.759Z] tsc_hz: 2490000000 (cyc) 00:31:15.123 [2024-12-09T23:12:55.759Z] ====================================== 00:31:15.123 [2024-12-09T23:12:55.759Z] poller_cost: 580 (cyc), 232 (nsec) 00:31:15.123 00:31:15.123 real 0m1.641s 00:31:15.123 user 0m1.424s 00:31:15.123 sys 0m0.109s 00:31:15.123 23:12:55 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:15.123 23:12:55 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:31:15.123 ************************************ 00:31:15.123 END TEST thread_poller_perf 00:31:15.123 ************************************ 00:31:15.123 23:12:55 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:31:15.123 ************************************ 00:31:15.123 END TEST thread 00:31:15.123 ************************************ 00:31:15.123 
00:31:15.123 real 0m3.598s 00:31:15.123 user 0m3.004s 00:31:15.123 sys 0m0.388s 00:31:15.123 23:12:55 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:15.123 23:12:55 thread -- common/autotest_common.sh@10 -- # set +x 00:31:15.123 23:12:55 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:31:15.123 23:12:55 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:31:15.123 23:12:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:15.123 23:12:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:15.123 23:12:55 -- common/autotest_common.sh@10 -- # set +x 00:31:15.123 ************************************ 00:31:15.123 START TEST app_cmdline 00:31:15.123 ************************************ 00:31:15.123 23:12:55 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:31:15.123 * Looking for test storage... 00:31:15.404 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:31:15.404 23:12:55 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:15.404 23:12:55 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:31:15.404 23:12:55 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:15.404 23:12:55 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:15.404 23:12:55 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:15.404 23:12:55 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:15.404 23:12:55 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:15.404 23:12:55 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:31:15.404 23:12:55 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:31:15.404 23:12:55 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:31:15.404 23:12:55 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:31:15.404 23:12:55 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:31:15.404 23:12:55 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:31:15.404 23:12:55 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:31:15.404 23:12:55 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:15.404 23:12:55 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:31:15.404 23:12:55 app_cmdline -- scripts/common.sh@345 -- # : 1 00:31:15.404 23:12:55 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:15.404 23:12:55 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:15.404 23:12:55 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:31:15.404 23:12:55 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:31:15.404 23:12:55 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:15.404 23:12:55 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:31:15.404 23:12:55 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:31:15.404 23:12:55 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:31:15.404 23:12:55 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:31:15.404 23:12:55 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:15.404 23:12:55 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:31:15.404 23:12:55 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:31:15.404 23:12:55 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:15.404 23:12:55 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:15.404 23:12:55 app_cmdline -- scripts/common.sh@368 -- # return 0 00:31:15.404 23:12:55 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:15.404 23:12:55 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:15.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.404 --rc genhtml_branch_coverage=1 00:31:15.404 --rc genhtml_function_coverage=1 00:31:15.404 --rc 
genhtml_legend=1 00:31:15.404 --rc geninfo_all_blocks=1 00:31:15.404 --rc geninfo_unexecuted_blocks=1 00:31:15.404 00:31:15.404 ' 00:31:15.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:15.404 23:12:55 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:15.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.404 --rc genhtml_branch_coverage=1 00:31:15.404 --rc genhtml_function_coverage=1 00:31:15.404 --rc genhtml_legend=1 00:31:15.404 --rc geninfo_all_blocks=1 00:31:15.404 --rc geninfo_unexecuted_blocks=1 00:31:15.404 00:31:15.404 ' 00:31:15.404 23:12:55 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:15.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.404 --rc genhtml_branch_coverage=1 00:31:15.404 --rc genhtml_function_coverage=1 00:31:15.404 --rc genhtml_legend=1 00:31:15.404 --rc geninfo_all_blocks=1 00:31:15.404 --rc geninfo_unexecuted_blocks=1 00:31:15.404 00:31:15.404 ' 00:31:15.404 23:12:55 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:15.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.404 --rc genhtml_branch_coverage=1 00:31:15.404 --rc genhtml_function_coverage=1 00:31:15.404 --rc genhtml_legend=1 00:31:15.404 --rc geninfo_all_blocks=1 00:31:15.404 --rc geninfo_unexecuted_blocks=1 00:31:15.404 00:31:15.404 ' 00:31:15.404 23:12:55 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:31:15.404 23:12:55 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59714 00:31:15.404 23:12:55 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59714 00:31:15.404 23:12:55 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:31:15.404 23:12:55 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59714 ']' 00:31:15.404 23:12:55 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:31:15.404 23:12:55 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:15.404 23:12:55 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:15.404 23:12:55 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:15.404 23:12:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:31:15.404 [2024-12-09 23:12:55.977040] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:31:15.404 [2024-12-09 23:12:55.977205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59714 ] 00:31:15.662 [2024-12-09 23:12:56.165575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:15.920 [2024-12-09 23:12:56.303910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:16.854 23:12:57 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:16.854 23:12:57 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:31:16.854 23:12:57 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:31:17.112 { 00:31:17.112 "version": "SPDK v25.01-pre git sha1 c12cb8fe3", 00:31:17.112 "fields": { 00:31:17.112 "major": 25, 00:31:17.112 "minor": 1, 00:31:17.112 "patch": 0, 00:31:17.112 "suffix": "-pre", 00:31:17.112 "commit": "c12cb8fe3" 00:31:17.112 } 00:31:17.112 } 00:31:17.112 23:12:57 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:31:17.112 23:12:57 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:31:17.112 23:12:57 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:31:17.112 23:12:57 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:31:17.112 23:12:57 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:31:17.112 23:12:57 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:31:17.112 23:12:57 app_cmdline -- app/cmdline.sh@26 -- # sort 00:31:17.112 23:12:57 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.113 23:12:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:31:17.113 23:12:57 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.113 23:12:57 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:31:17.113 23:12:57 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:31:17.113 23:12:57 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:31:17.113 23:12:57 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:31:17.113 23:12:57 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:31:17.113 23:12:57 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:17.113 23:12:57 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:17.113 23:12:57 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:17.113 23:12:57 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:17.113 23:12:57 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:17.113 23:12:57 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:17.113 23:12:57 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:17.113 23:12:57 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:31:17.113 23:12:57 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:31:17.372 request: 00:31:17.372 { 00:31:17.372 "method": "env_dpdk_get_mem_stats", 00:31:17.372 "req_id": 1 00:31:17.372 } 00:31:17.372 Got JSON-RPC error response 00:31:17.372 response: 00:31:17.372 { 00:31:17.372 "code": -32601, 00:31:17.372 "message": "Method not found" 00:31:17.372 } 00:31:17.372 23:12:57 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:31:17.372 23:12:57 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:17.372 23:12:57 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:17.372 23:12:57 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:17.372 23:12:57 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59714 00:31:17.372 23:12:57 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59714 ']' 00:31:17.372 23:12:57 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59714 00:31:17.372 23:12:57 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:31:17.372 23:12:57 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:17.372 23:12:57 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59714 00:31:17.372 killing process with pid 59714 00:31:17.372 23:12:57 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:17.372 23:12:57 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:17.372 23:12:57 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59714' 00:31:17.372 23:12:57 app_cmdline -- common/autotest_common.sh@973 -- # kill 59714 00:31:17.372 23:12:57 app_cmdline -- common/autotest_common.sh@978 -- # wait 59714 00:31:19.903 00:31:19.903 real 0m4.815s 00:31:19.903 user 0m5.134s 00:31:19.903 sys 0m0.650s 00:31:19.903 23:13:00 app_cmdline -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:31:19.903 ************************************ 00:31:19.903 23:13:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:31:19.903 END TEST app_cmdline 00:31:19.903 ************************************ 00:31:19.903 23:13:00 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:31:19.903 23:13:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:19.903 23:13:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:19.903 23:13:00 -- common/autotest_common.sh@10 -- # set +x 00:31:19.903 ************************************ 00:31:19.903 START TEST version 00:31:19.903 ************************************ 00:31:19.903 23:13:00 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:31:20.161 * Looking for test storage... 00:31:20.161 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:31:20.161 23:13:00 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:20.161 23:13:00 version -- common/autotest_common.sh@1711 -- # lcov --version 00:31:20.161 23:13:00 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:20.161 23:13:00 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:20.161 23:13:00 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:20.161 23:13:00 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:20.161 23:13:00 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:20.161 23:13:00 version -- scripts/common.sh@336 -- # IFS=.-: 00:31:20.161 23:13:00 version -- scripts/common.sh@336 -- # read -ra ver1 00:31:20.161 23:13:00 version -- scripts/common.sh@337 -- # IFS=.-: 00:31:20.161 23:13:00 version -- scripts/common.sh@337 -- # read -ra ver2 00:31:20.161 23:13:00 version -- scripts/common.sh@338 -- # local 'op=<' 00:31:20.161 23:13:00 version -- scripts/common.sh@340 -- # ver1_l=2 00:31:20.161 23:13:00 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:31:20.161 23:13:00 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:20.161 23:13:00 version -- scripts/common.sh@344 -- # case "$op" in 00:31:20.161 23:13:00 version -- scripts/common.sh@345 -- # : 1 00:31:20.161 23:13:00 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:20.161 23:13:00 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:20.161 23:13:00 version -- scripts/common.sh@365 -- # decimal 1 00:31:20.161 23:13:00 version -- scripts/common.sh@353 -- # local d=1 00:31:20.161 23:13:00 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:20.161 23:13:00 version -- scripts/common.sh@355 -- # echo 1 00:31:20.161 23:13:00 version -- scripts/common.sh@365 -- # ver1[v]=1 00:31:20.161 23:13:00 version -- scripts/common.sh@366 -- # decimal 2 00:31:20.161 23:13:00 version -- scripts/common.sh@353 -- # local d=2 00:31:20.161 23:13:00 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:20.161 23:13:00 version -- scripts/common.sh@355 -- # echo 2 00:31:20.161 23:13:00 version -- scripts/common.sh@366 -- # ver2[v]=2 00:31:20.161 23:13:00 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:20.161 23:13:00 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:20.161 23:13:00 version -- scripts/common.sh@368 -- # return 0 00:31:20.161 23:13:00 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:20.161 23:13:00 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:20.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.161 --rc genhtml_branch_coverage=1 00:31:20.161 --rc genhtml_function_coverage=1 00:31:20.161 --rc genhtml_legend=1 00:31:20.161 --rc geninfo_all_blocks=1 00:31:20.161 --rc geninfo_unexecuted_blocks=1 00:31:20.161 00:31:20.161 ' 00:31:20.161 23:13:00 version -- common/autotest_common.sh@1724 -- # 
LCOV_OPTS=' 00:31:20.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.161 --rc genhtml_branch_coverage=1 00:31:20.161 --rc genhtml_function_coverage=1 00:31:20.161 --rc genhtml_legend=1 00:31:20.161 --rc geninfo_all_blocks=1 00:31:20.161 --rc geninfo_unexecuted_blocks=1 00:31:20.161 00:31:20.161 ' 00:31:20.161 23:13:00 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:20.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.161 --rc genhtml_branch_coverage=1 00:31:20.161 --rc genhtml_function_coverage=1 00:31:20.161 --rc genhtml_legend=1 00:31:20.161 --rc geninfo_all_blocks=1 00:31:20.161 --rc geninfo_unexecuted_blocks=1 00:31:20.161 00:31:20.161 ' 00:31:20.161 23:13:00 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:20.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.161 --rc genhtml_branch_coverage=1 00:31:20.161 --rc genhtml_function_coverage=1 00:31:20.161 --rc genhtml_legend=1 00:31:20.161 --rc geninfo_all_blocks=1 00:31:20.161 --rc geninfo_unexecuted_blocks=1 00:31:20.161 00:31:20.161 ' 00:31:20.161 23:13:00 version -- app/version.sh@17 -- # get_header_version major 00:31:20.161 23:13:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:31:20.161 23:13:00 version -- app/version.sh@14 -- # cut -f2 00:31:20.161 23:13:00 version -- app/version.sh@14 -- # tr -d '"' 00:31:20.161 23:13:00 version -- app/version.sh@17 -- # major=25 00:31:20.161 23:13:00 version -- app/version.sh@18 -- # get_header_version minor 00:31:20.420 23:13:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:31:20.420 23:13:00 version -- app/version.sh@14 -- # cut -f2 00:31:20.420 23:13:00 version -- app/version.sh@14 -- # tr -d '"' 00:31:20.420 23:13:00 version -- app/version.sh@18 -- # minor=1 00:31:20.420 23:13:00 
version -- app/version.sh@19 -- # get_header_version patch 00:31:20.420 23:13:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:31:20.420 23:13:00 version -- app/version.sh@14 -- # tr -d '"' 00:31:20.420 23:13:00 version -- app/version.sh@14 -- # cut -f2 00:31:20.420 23:13:00 version -- app/version.sh@19 -- # patch=0 00:31:20.420 23:13:00 version -- app/version.sh@20 -- # get_header_version suffix 00:31:20.420 23:13:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:31:20.420 23:13:00 version -- app/version.sh@14 -- # cut -f2 00:31:20.420 23:13:00 version -- app/version.sh@14 -- # tr -d '"' 00:31:20.420 23:13:00 version -- app/version.sh@20 -- # suffix=-pre 00:31:20.420 23:13:00 version -- app/version.sh@22 -- # version=25.1 00:31:20.420 23:13:00 version -- app/version.sh@25 -- # (( patch != 0 )) 00:31:20.420 23:13:00 version -- app/version.sh@28 -- # version=25.1rc0 00:31:20.420 23:13:00 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:31:20.420 23:13:00 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:31:20.420 23:13:00 version -- app/version.sh@30 -- # py_version=25.1rc0 00:31:20.420 23:13:00 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:31:20.420 00:31:20.420 real 0m0.336s 00:31:20.420 user 0m0.205s 00:31:20.420 sys 0m0.189s 00:31:20.420 23:13:00 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:20.420 23:13:00 version -- common/autotest_common.sh@10 -- # set +x 00:31:20.420 ************************************ 00:31:20.420 END TEST version 00:31:20.420 ************************************ 00:31:20.420 
23:13:00 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:31:20.420 23:13:00 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:31:20.420 23:13:00 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:31:20.420 23:13:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:20.420 23:13:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:20.420 23:13:00 -- common/autotest_common.sh@10 -- # set +x 00:31:20.420 ************************************ 00:31:20.420 START TEST bdev_raid 00:31:20.420 ************************************ 00:31:20.420 23:13:00 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:31:20.420 * Looking for test storage... 00:31:20.420 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:31:20.420 23:13:01 bdev_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:20.420 23:13:01 bdev_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:31:20.420 23:13:01 bdev_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:20.679 23:13:01 bdev_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:20.679 23:13:01 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:20.679 23:13:01 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:20.679 23:13:01 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:20.679 23:13:01 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:31:20.679 23:13:01 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:31:20.679 23:13:01 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:31:20.679 23:13:01 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:31:20.679 23:13:01 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:31:20.679 23:13:01 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:31:20.679 23:13:01 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:31:20.679 23:13:01 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:31:20.679 23:13:01 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:31:20.679 23:13:01 bdev_raid -- scripts/common.sh@345 -- # : 1 00:31:20.679 23:13:01 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:20.679 23:13:01 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:20.679 23:13:01 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:31:20.679 23:13:01 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:31:20.679 23:13:01 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:20.679 23:13:01 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:31:20.679 23:13:01 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:31:20.679 23:13:01 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:31:20.679 23:13:01 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:31:20.679 23:13:01 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:20.679 23:13:01 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:31:20.679 23:13:01 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:31:20.679 23:13:01 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:20.679 23:13:01 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:20.679 23:13:01 bdev_raid -- scripts/common.sh@368 -- # return 0 00:31:20.679 23:13:01 bdev_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:20.679 23:13:01 bdev_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:20.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.679 --rc genhtml_branch_coverage=1 00:31:20.679 --rc genhtml_function_coverage=1 00:31:20.679 --rc genhtml_legend=1 00:31:20.679 --rc geninfo_all_blocks=1 00:31:20.679 --rc geninfo_unexecuted_blocks=1 00:31:20.679 00:31:20.679 ' 00:31:20.679 23:13:01 bdev_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:20.679 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:31:20.679 --rc genhtml_branch_coverage=1 00:31:20.679 --rc genhtml_function_coverage=1 00:31:20.679 --rc genhtml_legend=1 00:31:20.679 --rc geninfo_all_blocks=1 00:31:20.679 --rc geninfo_unexecuted_blocks=1 00:31:20.679 00:31:20.679 ' 00:31:20.679 23:13:01 bdev_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:20.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.679 --rc genhtml_branch_coverage=1 00:31:20.679 --rc genhtml_function_coverage=1 00:31:20.679 --rc genhtml_legend=1 00:31:20.679 --rc geninfo_all_blocks=1 00:31:20.679 --rc geninfo_unexecuted_blocks=1 00:31:20.679 00:31:20.679 ' 00:31:20.679 23:13:01 bdev_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:20.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.679 --rc genhtml_branch_coverage=1 00:31:20.679 --rc genhtml_function_coverage=1 00:31:20.679 --rc genhtml_legend=1 00:31:20.679 --rc geninfo_all_blocks=1 00:31:20.679 --rc geninfo_unexecuted_blocks=1 00:31:20.679 00:31:20.679 ' 00:31:20.679 23:13:01 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:31:20.679 23:13:01 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:31:20.679 23:13:01 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:31:20.679 23:13:01 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:31:20.679 23:13:01 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:31:20.679 23:13:01 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:31:20.679 23:13:01 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:31:20.679 23:13:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:20.679 23:13:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:20.679 23:13:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:20.679 ************************************ 
00:31:20.679 START TEST raid1_resize_data_offset_test 00:31:20.679 ************************************ 00:31:20.679 23:13:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:31:20.679 23:13:01 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=59913 00:31:20.679 Process raid pid: 59913 00:31:20.679 23:13:01 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59913' 00:31:20.679 23:13:01 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59913 00:31:20.679 23:13:01 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:31:20.679 23:13:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 59913 ']' 00:31:20.679 23:13:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:20.679 23:13:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:20.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:20.679 23:13:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:20.679 23:13:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:20.679 23:13:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:31:20.679 [2024-12-09 23:13:01.288192] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:31:20.679 [2024-12-09 23:13:01.288376] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:20.938 [2024-12-09 23:13:01.490349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.197 [2024-12-09 23:13:01.622166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:21.457 [2024-12-09 23:13:01.851916] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:21.457 [2024-12-09 23:13:01.851984] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:21.716 23:13:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:21.716 23:13:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:31:21.716 23:13:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:31:21.716 23:13:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.716 23:13:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:31:21.716 malloc0 00:31:21.716 23:13:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.716 23:13:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:31:21.716 23:13:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.716 23:13:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:31:21.716 malloc1 00:31:21.716 23:13:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.716 23:13:02 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:31:21.716 23:13:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.716 23:13:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:31:21.975 null0 00:31:21.975 23:13:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.975 23:13:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:31:21.975 23:13:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.975 23:13:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:31:21.976 [2024-12-09 23:13:02.359218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:31:21.976 [2024-12-09 23:13:02.361387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:31:21.976 [2024-12-09 23:13:02.361459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:31:21.976 [2024-12-09 23:13:02.361643] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:31:21.976 [2024-12-09 23:13:02.361662] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:31:21.976 [2024-12-09 23:13:02.361962] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:31:21.976 [2024-12-09 23:13:02.362146] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:31:21.976 [2024-12-09 23:13:02.362162] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:31:21.976 [2024-12-09 23:13:02.362349] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:31:21.976 23:13:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.976 23:13:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:31:21.976 23:13:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:21.976 23:13:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.976 23:13:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:31:21.976 23:13:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.976 23:13:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:31:21.976 23:13:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:31:21.976 23:13:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.976 23:13:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:31:21.976 [2024-12-09 23:13:02.419209] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:31:21.976 23:13:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.976 23:13:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:31:21.976 23:13:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.976 23:13:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:31:22.541 malloc2 00:31:22.541 23:13:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.541 23:13:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:31:22.541 23:13:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.542 23:13:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:31:22.542 [2024-12-09 23:13:03.026892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:31:22.542 [2024-12-09 23:13:03.045984] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:31:22.542 23:13:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.542 [2024-12-09 23:13:03.048232] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:31:22.542 23:13:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:31:22.542 23:13:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:22.542 23:13:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.542 23:13:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:31:22.542 23:13:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.542 23:13:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:31:22.542 23:13:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59913 00:31:22.542 23:13:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 59913 ']' 00:31:22.542 23:13:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 59913 00:31:22.542 23:13:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:31:22.542 23:13:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:31:22.542 23:13:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59913 00:31:22.542 23:13:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:22.542 killing process with pid 59913 00:31:22.542 23:13:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:22.542 23:13:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59913' 00:31:22.542 23:13:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 59913 00:31:22.542 23:13:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 59913 00:31:22.542 [2024-12-09 23:13:03.140517] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:22.542 [2024-12-09 23:13:03.140834] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:31:22.542 [2024-12-09 23:13:03.140894] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:22.542 [2024-12-09 23:13:03.140913] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:31:22.801 [2024-12-09 23:13:03.180773] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:22.801 [2024-12-09 23:13:03.181134] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:22.801 [2024-12-09 23:13:03.181161] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:31:24.704 [2024-12-09 23:13:05.088989] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:26.079 23:13:06 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:31:26.079 00:31:26.079 real 0m5.142s 00:31:26.079 user 0m5.038s 00:31:26.079 sys 0m0.613s 00:31:26.079 23:13:06 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:26.079 23:13:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:31:26.079 ************************************ 00:31:26.079 END TEST raid1_resize_data_offset_test 00:31:26.079 ************************************ 00:31:26.079 23:13:06 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:31:26.079 23:13:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:26.079 23:13:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:26.079 23:13:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:26.079 ************************************ 00:31:26.079 START TEST raid0_resize_superblock_test 00:31:26.079 ************************************ 00:31:26.079 23:13:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:31:26.079 23:13:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:31:26.079 23:13:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=59998 00:31:26.079 23:13:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:31:26.079 Process raid pid: 59998 00:31:26.079 23:13:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 59998' 00:31:26.079 23:13:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 59998 00:31:26.079 23:13:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 59998 ']' 00:31:26.079 23:13:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:26.079 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:31:26.079 23:13:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:26.079 23:13:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:26.079 23:13:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:26.079 23:13:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:26.079 [2024-12-09 23:13:06.480909] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:31:26.079 [2024-12-09 23:13:06.481087] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:26.079 [2024-12-09 23:13:06.662631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:26.358 [2024-12-09 23:13:06.791417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:26.616 [2024-12-09 23:13:07.020342] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:26.616 [2024-12-09 23:13:07.020407] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:26.874 23:13:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:26.874 23:13:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:31:26.874 23:13:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:31:26.874 23:13:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.874 23:13:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.439 
malloc0 00:31:27.439 23:13:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.439 23:13:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:31:27.439 23:13:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.439 23:13:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.439 [2024-12-09 23:13:07.945943] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:31:27.439 [2024-12-09 23:13:07.946041] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:27.439 [2024-12-09 23:13:07.946094] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:31:27.439 [2024-12-09 23:13:07.946124] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:27.439 [2024-12-09 23:13:07.948974] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:27.439 [2024-12-09 23:13:07.949024] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:31:27.439 pt0 00:31:27.439 23:13:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.439 23:13:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:31:27.439 23:13:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.439 23:13:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.439 03c92d71-9788-463a-a50b-6a5fd78b0c97 00:31:27.439 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.439 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:31:27.440 23:13:08 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.440 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.440 f75e47f9-c65b-4260-9a27-ecbcc690a843 00:31:27.440 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.440 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:31:27.440 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.440 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.440 142d4831-1779-4c1c-80f7-dedb51f4618e 00:31:27.440 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.440 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:31:27.440 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:31:27.440 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.440 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.698 [2024-12-09 23:13:08.078709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev f75e47f9-c65b-4260-9a27-ecbcc690a843 is claimed 00:31:27.698 [2024-12-09 23:13:08.078823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 142d4831-1779-4c1c-80f7-dedb51f4618e is claimed 00:31:27.698 [2024-12-09 23:13:08.078986] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:31:27.698 [2024-12-09 23:13:08.079010] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:31:27.698 [2024-12-09 23:13:08.079324] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:31:27.698 [2024-12-09 23:13:08.079556] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:31:27.698 [2024-12-09 23:13:08.079574] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:31:27.698 [2024-12-09 23:13:08.079762] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:27.698 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.698 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:31:27.698 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:31:27.698 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.698 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.698 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.698 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:31:27.698 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:31:27.698 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.698 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.698 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:31:27.698 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.698 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:31:27.698 23:13:08 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@879 -- # case $raid_level in 00:31:27.698 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:31:27.698 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:31:27.698 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:31:27.698 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.698 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.698 [2024-12-09 23:13:08.190852] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:27.698 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.698 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:31:27.698 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:31:27.698 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:31:27.698 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:31:27.698 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.698 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.698 [2024-12-09 23:13:08.238780] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:31:27.698 [2024-12-09 23:13:08.238826] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'f75e47f9-c65b-4260-9a27-ecbcc690a843' was resized: old size 131072, new size 204800 00:31:27.698 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.698 23:13:08 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:31:27.698 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.698 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.698 [2024-12-09 23:13:08.246688] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:31:27.698 [2024-12-09 23:13:08.246728] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '142d4831-1779-4c1c-80f7-dedb51f4618e' was resized: old size 131072, new size 204800 00:31:27.698 [2024-12-09 23:13:08.246771] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:31:27.698 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.698 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:31:27.698 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:31:27.698 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.698 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.698 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.698 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:31:27.698 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:31:27.698 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.698 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.698 23:13:08 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:31:27.699 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.699 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:31:27.959 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:31:27.959 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:31:27.959 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.959 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:31:27.959 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:31:27.959 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.959 [2024-12-09 23:13:08.338693] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:27.959 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.959 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:31:27.959 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:31:27.959 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:31:27.959 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:31:27.959 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.959 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.959 [2024-12-09 23:13:08.378351] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 
being removed: closing lvstore lvs0 00:31:27.959 [2024-12-09 23:13:08.378507] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:31:27.959 [2024-12-09 23:13:08.378538] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:27.959 [2024-12-09 23:13:08.378558] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:31:27.959 [2024-12-09 23:13:08.378721] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:27.959 [2024-12-09 23:13:08.378770] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:27.959 [2024-12-09 23:13:08.378788] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:31:27.959 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.959 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:31:27.959 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.959 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.959 [2024-12-09 23:13:08.390217] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:31:27.959 [2024-12-09 23:13:08.390311] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:27.959 [2024-12-09 23:13:08.390348] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:31:27.959 [2024-12-09 23:13:08.390370] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:27.959 [2024-12-09 23:13:08.393176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:27.959 [2024-12-09 23:13:08.393235] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:31:27.959 pt0 00:31:27.959 [2024-12-09 23:13:08.395254] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev f75e47f9-c65b-4260-9a27-ecbcc690a843 00:31:27.959 [2024-12-09 23:13:08.395330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev f75e47f9-c65b-4260-9a27-ecbcc690a843 is claimed 00:31:27.959 [2024-12-09 23:13:08.395468] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 142d4831-1779-4c1c-80f7-dedb51f4618e 00:31:27.959 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.959 [2024-12-09 23:13:08.395492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 142d4831-1779-4c1c-80f7-dedb51f4618e is claimed 00:31:27.959 [2024-12-09 23:13:08.395639] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 142d4831-1779-4c1c-80f7-dedb51f4618e (2) smaller than existing raid bdev Raid (3) 00:31:27.959 [2024-12-09 23:13:08.395667] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev f75e47f9-c65b-4260-9a27-ecbcc690a843: File exists 00:31:27.959 [2024-12-09 23:13:08.395723] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:31:27.959 [2024-12-09 23:13:08.395746] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:31:27.959 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:31:27.959 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.959 [2024-12-09 23:13:08.396081] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:31:27.959 [2024-12-09 23:13:08.396256] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:31:27.959 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.959 [2024-12-09 
23:13:08.396278] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:31:27.959 [2024-12-09 23:13:08.396469] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:27.959 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.959 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:31:27.959 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:31:27.959 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:31:27.959 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:31:27.959 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.959 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.959 [2024-12-09 23:13:08.418505] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:27.960 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.960 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:31:27.960 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:31:27.960 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:31:27.960 23:13:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 59998 00:31:27.960 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 59998 ']' 00:31:27.960 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 59998 00:31:27.960 23:13:08 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:31:27.960 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:27.960 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59998 00:31:27.960 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:27.960 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:27.960 killing process with pid 59998 00:31:27.960 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59998' 00:31:27.960 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 59998 00:31:27.960 [2024-12-09 23:13:08.498849] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:27.960 [2024-12-09 23:13:08.498952] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:27.960 23:13:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 59998 00:31:27.960 [2024-12-09 23:13:08.499004] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:27.960 [2024-12-09 23:13:08.499016] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:31:29.865 [2024-12-09 23:13:10.019444] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:30.803 23:13:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:31:30.803 00:31:30.803 real 0m4.869s 00:31:30.803 user 0m5.070s 00:31:30.803 sys 0m0.633s 00:31:30.803 23:13:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:30.803 23:13:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:30.803 
************************************ 00:31:30.803 END TEST raid0_resize_superblock_test 00:31:30.803 ************************************ 00:31:30.803 23:13:11 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:31:30.803 23:13:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:30.803 23:13:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:30.803 23:13:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:30.803 ************************************ 00:31:30.803 START TEST raid1_resize_superblock_test 00:31:30.803 ************************************ 00:31:30.803 23:13:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:31:30.803 23:13:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:31:30.803 23:13:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60101 00:31:30.803 Process raid pid: 60101 00:31:30.803 23:13:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:31:30.803 23:13:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60101' 00:31:30.803 23:13:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60101 00:31:30.803 23:13:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60101 ']' 00:31:30.803 23:13:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:30.803 23:13:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:30.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:30.803 23:13:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:30.803 23:13:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:30.803 23:13:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:30.803 [2024-12-09 23:13:11.411919] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:31:30.803 [2024-12-09 23:13:11.412544] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:31.062 [2024-12-09 23:13:11.584916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:31.321 [2024-12-09 23:13:11.716739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:31.321 [2024-12-09 23:13:11.948076] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:31.321 [2024-12-09 23:13:11.948136] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:31.889 23:13:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:31.889 23:13:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:31:31.889 23:13:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:31:31.889 23:13:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.889 23:13:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.456 malloc0 00:31:32.456 23:13:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.456 23:13:12 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:31:32.456 23:13:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.456 23:13:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.456 [2024-12-09 23:13:12.946364] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:31:32.456 [2024-12-09 23:13:12.946453] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:32.456 [2024-12-09 23:13:12.946482] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:31:32.456 [2024-12-09 23:13:12.946503] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:32.456 [2024-12-09 23:13:12.949121] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:32.456 [2024-12-09 23:13:12.949171] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:31:32.456 pt0 00:31:32.456 23:13:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.456 23:13:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:31:32.456 23:13:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.456 23:13:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.456 e47592e3-2937-499b-a943-4d570b8cc59c 00:31:32.456 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.456 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:31:32.456 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.456 23:13:13 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.456 4314f0ec-68d4-4efd-8709-69bf83e281dd 00:31:32.456 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.456 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:31:32.456 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.456 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.456 2d10f339-33d7-473b-b6bc-aacedab75c68 00:31:32.456 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.456 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:31:32.456 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:31:32.456 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.456 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.456 [2024-12-09 23:13:13.075038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4314f0ec-68d4-4efd-8709-69bf83e281dd is claimed 00:31:32.456 [2024-12-09 23:13:13.075162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2d10f339-33d7-473b-b6bc-aacedab75c68 is claimed 00:31:32.456 [2024-12-09 23:13:13.075326] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:31:32.456 [2024-12-09 23:13:13.075348] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:31:32.456 [2024-12-09 23:13:13.075721] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:31:32.456 [2024-12-09 23:13:13.075961] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:31:32.456 [2024-12-09 23:13:13.075977] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:31:32.456 [2024-12-09 23:13:13.076166] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:32.456 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.456 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:31:32.456 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:31:32.456 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.457 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.715 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.715 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:31:32.715 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:31:32.715 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.715 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.715 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:31:32.715 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.715 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:31:32.715 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:31:32.715 23:13:13 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:31:32.715 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:31:32.715 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:31:32.715 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.715 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.715 [2024-12-09 23:13:13.179154] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:32.715 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.715 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:31:32.716 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:31:32.716 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:31:32.716 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:31:32.716 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.716 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.716 [2024-12-09 23:13:13.223072] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:31:32.716 [2024-12-09 23:13:13.223111] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '4314f0ec-68d4-4efd-8709-69bf83e281dd' was resized: old size 131072, new size 204800 00:31:32.716 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.716 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:31:32.716 23:13:13 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.716 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.716 [2024-12-09 23:13:13.235001] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:31:32.716 [2024-12-09 23:13:13.235037] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '2d10f339-33d7-473b-b6bc-aacedab75c68' was resized: old size 131072, new size 204800 00:31:32.716 [2024-12-09 23:13:13.235096] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:31:32.716 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.716 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:31:32.716 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.716 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.716 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:31:32.716 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.716 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:31:32.716 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:31:32.716 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:31:32.716 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.716 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.716 23:13:13 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.716 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:31:32.716 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:31:32.716 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:31:32.716 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:31:32.716 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:31:32.716 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.716 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.716 [2024-12-09 23:13:13.346908] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:32.979 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.979 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:31:32.979 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:31:32.979 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:31:32.979 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:31:32.979 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.979 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.979 [2024-12-09 23:13:13.394644] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:31:32.979 [2024-12-09 23:13:13.394879] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:31:32.979 [2024-12-09 23:13:13.394917] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:31:32.979 [2024-12-09 23:13:13.395093] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:32.979 [2024-12-09 23:13:13.395305] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:32.979 [2024-12-09 23:13:13.395374] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:32.979 [2024-12-09 23:13:13.395406] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:31:32.979 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.979 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:31:32.979 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.979 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.979 [2024-12-09 23:13:13.406528] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:31:32.979 [2024-12-09 23:13:13.406602] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:32.979 [2024-12-09 23:13:13.406627] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:31:32.979 [2024-12-09 23:13:13.406646] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:32.979 [2024-12-09 23:13:13.409325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:32.979 [2024-12-09 23:13:13.409533] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:31:32.979 pt0 00:31:32.979 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.979 
23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:31:32.979 [2024-12-09 23:13:13.411517] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 4314f0ec-68d4-4efd-8709-69bf83e281dd 00:31:32.979 [2024-12-09 23:13:13.411594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4314f0ec-68d4-4efd-8709-69bf83e281dd is claimed 00:31:32.979 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.979 [2024-12-09 23:13:13.411713] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 2d10f339-33d7-473b-b6bc-aacedab75c68 00:31:32.979 [2024-12-09 23:13:13.411735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2d10f339-33d7-473b-b6bc-aacedab75c68 is claimed 00:31:32.979 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.979 [2024-12-09 23:13:13.411860] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 2d10f339-33d7-473b-b6bc-aacedab75c68 (2) smaller than existing raid bdev Raid (3) 00:31:32.979 [2024-12-09 23:13:13.411887] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 4314f0ec-68d4-4efd-8709-69bf83e281dd: File exists 00:31:32.979 [2024-12-09 23:13:13.411937] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:31:32.979 [2024-12-09 23:13:13.411952] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:31:32.979 [2024-12-09 23:13:13.412236] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:31:32.979 [2024-12-09 23:13:13.412417] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:31:32.979 [2024-12-09 23:13:13.412430] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:31:32.979 
[2024-12-09 23:13:13.412593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:32.979 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.979 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:31:32.979 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:31:32.979 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:31:32.979 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:31:32.979 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.979 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.979 [2024-12-09 23:13:13.434835] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:32.979 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.979 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:31:32.979 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:31:32.979 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:31:32.979 23:13:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60101 00:31:32.979 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60101 ']' 00:31:32.979 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60101 00:31:32.979 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:31:32.979 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:31:32.979 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60101 00:31:32.979 killing process with pid 60101 00:31:32.979 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:32.979 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:32.979 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60101' 00:31:32.979 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60101 00:31:32.979 [2024-12-09 23:13:13.512232] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:32.979 [2024-12-09 23:13:13.512331] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:32.979 [2024-12-09 23:13:13.512408] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:32.979 [2024-12-09 23:13:13.512421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:31:32.979 23:13:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60101 00:31:34.905 [2024-12-09 23:13:15.055581] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:35.838 23:13:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:31:35.838 00:31:35.838 real 0m4.965s 00:31:35.838 user 0m5.270s 00:31:35.838 sys 0m0.619s 00:31:35.838 23:13:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:35.838 ************************************ 00:31:35.838 END TEST raid1_resize_superblock_test 00:31:35.838 ************************************ 00:31:35.838 23:13:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:35.838 
23:13:16 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:31:35.838 23:13:16 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:31:35.838 23:13:16 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:31:35.838 23:13:16 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:31:35.838 23:13:16 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:31:35.838 23:13:16 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:31:35.838 23:13:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:35.838 23:13:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:35.838 23:13:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:35.838 ************************************ 00:31:35.838 START TEST raid_function_test_raid0 00:31:35.838 ************************************ 00:31:35.838 23:13:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:31:35.838 23:13:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:31:35.838 23:13:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:31:35.838 23:13:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:31:35.838 Process raid pid: 60203 00:31:35.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:35.838 23:13:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60203 00:31:35.838 23:13:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60203' 00:31:35.838 23:13:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:31:35.838 23:13:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60203 00:31:35.838 23:13:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60203 ']' 00:31:35.838 23:13:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:35.838 23:13:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:35.838 23:13:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:35.838 23:13:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:35.838 23:13:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:31:35.838 [2024-12-09 23:13:16.471412] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:31:35.838 [2024-12-09 23:13:16.472231] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:36.096 [2024-12-09 23:13:16.655881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:36.353 [2024-12-09 23:13:16.788498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:36.610 [2024-12-09 23:13:17.013453] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:36.610 [2024-12-09 23:13:17.013505] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:36.868 23:13:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:36.868 23:13:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:31:36.868 23:13:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:31:36.868 23:13:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.868 23:13:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:31:36.868 Base_1 00:31:36.868 23:13:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.868 23:13:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:31:36.868 23:13:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.868 23:13:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:31:36.868 Base_2 00:31:36.868 23:13:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.868 23:13:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:31:36.868 23:13:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.868 23:13:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:31:36.868 [2024-12-09 23:13:17.467717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:31:36.868 [2024-12-09 23:13:17.469883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:31:36.868 [2024-12-09 23:13:17.469960] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:31:36.868 [2024-12-09 23:13:17.469976] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:31:36.868 [2024-12-09 23:13:17.470297] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:31:36.868 [2024-12-09 23:13:17.470460] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:31:36.868 [2024-12-09 23:13:17.470472] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:31:36.868 [2024-12-09 23:13:17.470632] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:36.868 23:13:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.868 23:13:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:31:36.868 23:13:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.868 23:13:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:31:36.868 23:13:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:31:36.868 23:13:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.126 23:13:17 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:31:37.126 23:13:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:31:37.126 23:13:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:31:37.126 23:13:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:31:37.126 23:13:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:31:37.126 23:13:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:37.126 23:13:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:37.126 23:13:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:37.126 23:13:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:31:37.126 23:13:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:37.126 23:13:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:37.126 23:13:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:31:37.126 [2024-12-09 23:13:17.723459] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:31:37.126 /dev/nbd0 00:31:37.384 23:13:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:37.384 23:13:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:37.384 23:13:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:31:37.384 23:13:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:31:37.384 23:13:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:37.384 
23:13:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:37.384 23:13:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:31:37.384 23:13:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:31:37.384 23:13:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:37.384 23:13:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:37.384 23:13:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:37.384 1+0 records in 00:31:37.384 1+0 records out 00:31:37.384 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407487 s, 10.1 MB/s 00:31:37.384 23:13:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:37.384 23:13:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:31:37.384 23:13:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:37.384 23:13:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:37.384 23:13:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:31:37.384 23:13:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:37.384 23:13:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:37.384 23:13:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:31:37.384 23:13:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:31:37.384 23:13:17 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:31:37.641 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:31:37.641 { 00:31:37.641 "nbd_device": "/dev/nbd0", 00:31:37.641 "bdev_name": "raid" 00:31:37.641 } 00:31:37.641 ]' 00:31:37.641 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:31:37.641 { 00:31:37.641 "nbd_device": "/dev/nbd0", 00:31:37.641 "bdev_name": "raid" 00:31:37.641 } 00:31:37.641 ]' 00:31:37.641 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:37.641 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:31:37.641 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:31:37.641 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:37.641 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:31:37.641 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:31:37.641 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:31:37.641 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:31:37.641 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:31:37.641 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:31:37.641 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:31:37.641 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:31:37.641 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:31:37.641 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v 
LOG-SEC 00:31:37.641 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:31:37.641 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:31:37.641 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:31:37.641 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:31:37.641 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:31:37.642 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:31:37.642 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:31:37.642 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:31:37.642 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:31:37.642 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:31:37.642 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:31:37.642 4096+0 records in 00:31:37.642 4096+0 records out 00:31:37.642 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0373378 s, 56.2 MB/s 00:31:37.642 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:31:37.900 4096+0 records in 00:31:37.900 4096+0 records out 00:31:37.900 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.29056 s, 7.2 MB/s 00:31:37.900 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:31:37.900 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:31:37.900 23:13:18 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:31:37.900 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:31:37.900 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:31:37.900 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:31:37.900 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:31:37.900 128+0 records in 00:31:37.900 128+0 records out 00:31:37.900 65536 bytes (66 kB, 64 KiB) copied, 0.00122783 s, 53.4 MB/s 00:31:37.900 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:31:37.900 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:31:37.900 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:31:37.900 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:31:37.900 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:31:37.900 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:31:37.900 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:31:37.900 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:31:37.900 2035+0 records in 00:31:37.900 2035+0 records out 00:31:37.900 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0180738 s, 57.6 MB/s 00:31:37.900 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:31:38.158 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:31:38.158 23:13:18 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:31:38.158 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:31:38.158 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:31:38.158 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:31:38.158 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:31:38.158 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:31:38.158 456+0 records in 00:31:38.158 456+0 records out 00:31:38.158 233472 bytes (233 kB, 228 KiB) copied, 0.00574003 s, 40.7 MB/s 00:31:38.158 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:31:38.158 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:31:38.158 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:31:38.158 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:31:38.158 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:31:38.158 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:31:38.158 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:31:38.158 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:31:38.158 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:38.158 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:38.158 23:13:18 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:31:38.158 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:38.158 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:31:38.415 [2024-12-09 23:13:18.808477] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:38.415 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:38.415 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:38.415 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:38.415 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:38.415 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:38.415 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:38.415 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:31:38.415 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:31:38.415 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:31:38.415 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:31:38.415 23:13:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:31:38.673 23:13:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:38.673 23:13:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:38.673 23:13:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:31:38.673 23:13:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:38.673 23:13:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:38.673 23:13:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:31:38.673 23:13:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:31:38.673 23:13:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:31:38.673 23:13:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:31:38.673 23:13:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:31:38.673 23:13:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:31:38.673 23:13:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60203 00:31:38.673 23:13:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60203 ']' 00:31:38.673 23:13:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60203 00:31:38.673 23:13:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:31:38.673 23:13:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:38.673 23:13:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60203 00:31:38.673 killing process with pid 60203 00:31:38.673 23:13:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:38.673 23:13:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:38.673 23:13:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60203' 00:31:38.673 23:13:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60203 
00:31:38.673 [2024-12-09 23:13:19.184036] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:38.673 [2024-12-09 23:13:19.184145] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:38.673 23:13:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60203 00:31:38.673 [2024-12-09 23:13:19.184198] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:38.673 [2024-12-09 23:13:19.184217] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:31:38.931 [2024-12-09 23:13:19.407071] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:40.339 23:13:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:31:40.339 00:31:40.339 real 0m4.223s 00:31:40.339 user 0m4.872s 00:31:40.339 sys 0m1.111s 00:31:40.339 23:13:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:40.339 23:13:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:31:40.339 ************************************ 00:31:40.339 END TEST raid_function_test_raid0 00:31:40.339 ************************************ 00:31:40.339 23:13:20 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:31:40.339 23:13:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:40.339 23:13:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:40.339 23:13:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:40.339 ************************************ 00:31:40.339 START TEST raid_function_test_concat 00:31:40.339 ************************************ 00:31:40.339 23:13:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:31:40.339 23:13:20 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:31:40.339 23:13:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:31:40.339 23:13:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:31:40.339 23:13:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:31:40.339 23:13:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60338 00:31:40.339 23:13:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60338' 00:31:40.339 Process raid pid: 60338 00:31:40.339 23:13:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60338 00:31:40.339 23:13:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60338 ']' 00:31:40.339 23:13:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:40.339 23:13:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:40.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:40.339 23:13:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:40.339 23:13:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:40.339 23:13:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:31:40.339 [2024-12-09 23:13:20.755474] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:31:40.339 [2024-12-09 23:13:20.755690] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:40.339 [2024-12-09 23:13:20.943412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:40.597 [2024-12-09 23:13:21.096576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:40.854 [2024-12-09 23:13:21.339133] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:40.854 [2024-12-09 23:13:21.339196] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:41.112 23:13:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:41.112 23:13:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:31:41.112 23:13:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:31:41.112 23:13:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.112 23:13:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:31:41.112 Base_1 00:31:41.112 23:13:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.112 23:13:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:31:41.112 23:13:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.112 23:13:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:31:41.370 Base_2 00:31:41.370 23:13:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.370 23:13:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:31:41.370 23:13:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.370 23:13:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:31:41.370 [2024-12-09 23:13:21.767690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:31:41.370 [2024-12-09 23:13:21.770172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:31:41.370 [2024-12-09 23:13:21.770263] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:31:41.370 [2024-12-09 23:13:21.770279] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:31:41.370 [2024-12-09 23:13:21.770662] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:31:41.370 [2024-12-09 23:13:21.770862] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:31:41.370 [2024-12-09 23:13:21.770876] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:31:41.370 [2024-12-09 23:13:21.771090] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:41.370 23:13:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.370 23:13:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:31:41.370 23:13:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.370 23:13:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:31:41.370 23:13:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:31:41.370 23:13:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.370 23:13:21 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:31:41.370 23:13:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:31:41.370 23:13:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:31:41.370 23:13:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:31:41.370 23:13:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:31:41.370 23:13:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:41.370 23:13:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:41.370 23:13:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:41.370 23:13:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:31:41.370 23:13:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:41.370 23:13:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:41.370 23:13:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:31:41.628 [2024-12-09 23:13:22.107712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:31:41.628 /dev/nbd0 00:31:41.628 23:13:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:41.628 23:13:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:41.628 23:13:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:31:41.628 23:13:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:31:41.628 23:13:22 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:41.628 23:13:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:41.628 23:13:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:31:41.628 23:13:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:31:41.628 23:13:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:41.628 23:13:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:41.628 23:13:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:41.628 1+0 records in 00:31:41.628 1+0 records out 00:31:41.628 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348236 s, 11.8 MB/s 00:31:41.628 23:13:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:41.628 23:13:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:31:41.628 23:13:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:41.628 23:13:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:41.628 23:13:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:31:41.628 23:13:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:41.628 23:13:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:41.628 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:31:41.628 23:13:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk.sock 00:31:41.628 23:13:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:31:41.885 23:13:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:31:41.885 { 00:31:41.885 "nbd_device": "/dev/nbd0", 00:31:41.885 "bdev_name": "raid" 00:31:41.885 } 00:31:41.885 ]' 00:31:42.143 23:13:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:31:42.143 { 00:31:42.143 "nbd_device": "/dev/nbd0", 00:31:42.143 "bdev_name": "raid" 00:31:42.143 } 00:31:42.143 ]' 00:31:42.143 23:13:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:42.143 23:13:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:31:42.143 23:13:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:42.143 23:13:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:31:42.143 23:13:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:31:42.143 23:13:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:31:42.143 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:31:42.143 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:31:42.143 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:31:42.143 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:31:42.143 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:31:42.144 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:31:42.144 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC 
/dev/nbd0 00:31:42.144 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:31:42.144 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:31:42.144 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:31:42.144 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:31:42.144 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:31:42.144 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:31:42.144 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:31:42.144 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:31:42.144 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:31:42.144 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:31:42.144 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:31:42.144 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:31:42.144 4096+0 records in 00:31:42.144 4096+0 records out 00:31:42.144 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0327462 s, 64.0 MB/s 00:31:42.144 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:31:42.406 4096+0 records in 00:31:42.406 4096+0 records out 00:31:42.406 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.287702 s, 7.3 MB/s 00:31:42.406 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:31:42.406 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 
2097152 /raidtest/raidrandtest /dev/nbd0 00:31:42.406 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:31:42.406 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:31:42.406 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:31:42.406 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:31:42.406 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:31:42.406 128+0 records in 00:31:42.406 128+0 records out 00:31:42.406 65536 bytes (66 kB, 64 KiB) copied, 0.00104017 s, 63.0 MB/s 00:31:42.406 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:31:42.406 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:31:42.406 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:31:42.406 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:31:42.406 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:31:42.406 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:31:42.406 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:31:42.406 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:31:42.406 2035+0 records in 00:31:42.406 2035+0 records out 00:31:42.406 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.01906 s, 54.7 MB/s 00:31:42.406 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:31:42.406 23:13:22 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:31:42.406 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:31:42.406 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:31:42.406 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:31:42.406 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:31:42.406 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:31:42.406 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:31:42.406 456+0 records in 00:31:42.406 456+0 records out 00:31:42.406 233472 bytes (233 kB, 228 KiB) copied, 0.00237178 s, 98.4 MB/s 00:31:42.406 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:31:42.406 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:31:42.406 23:13:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:31:42.406 23:13:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:31:42.406 23:13:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:31:42.406 23:13:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:31:42.406 23:13:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:31:42.406 23:13:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:31:42.406 23:13:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:42.406 
23:13:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:42.406 23:13:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:31:42.406 23:13:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:42.406 23:13:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:31:42.670 23:13:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:42.670 [2024-12-09 23:13:23.263379] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:42.670 23:13:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:42.670 23:13:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:42.670 23:13:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:42.670 23:13:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:42.670 23:13:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:42.670 23:13:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:31:42.670 23:13:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:31:42.670 23:13:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:31:42.670 23:13:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:31:42.670 23:13:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:31:43.235 23:13:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:43.235 23:13:23 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:43.235 23:13:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:43.235 23:13:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:43.235 23:13:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:31:43.235 23:13:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:43.235 23:13:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:31:43.235 23:13:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:31:43.235 23:13:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:31:43.235 23:13:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:31:43.235 23:13:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:31:43.235 23:13:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60338 00:31:43.235 23:13:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60338 ']' 00:31:43.235 23:13:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60338 00:31:43.235 23:13:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:31:43.235 23:13:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:43.236 23:13:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60338 00:31:43.236 killing process with pid 60338 00:31:43.236 23:13:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:43.236 23:13:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:43.236 23:13:23 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 60338' 00:31:43.236 23:13:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60338 00:31:43.236 23:13:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60338 00:31:43.236 [2024-12-09 23:13:23.677419] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:43.236 [2024-12-09 23:13:23.677543] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:43.236 [2024-12-09 23:13:23.677615] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:43.236 [2024-12-09 23:13:23.677633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:31:43.493 [2024-12-09 23:13:23.896864] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:44.547 ************************************ 00:31:44.547 END TEST raid_function_test_concat 00:31:44.547 ************************************ 00:31:44.547 23:13:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:31:44.547 00:31:44.547 real 0m4.458s 00:31:44.547 user 0m5.381s 00:31:44.547 sys 0m1.078s 00:31:44.547 23:13:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:44.547 23:13:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:31:44.547 23:13:25 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:31:44.547 23:13:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:44.547 23:13:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:44.547 23:13:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:44.547 ************************************ 00:31:44.548 START TEST raid0_resize_test 00:31:44.548 ************************************ 00:31:44.548 23:13:25 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:31:44.548 23:13:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:31:44.548 23:13:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:31:44.548 23:13:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:31:44.548 23:13:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:31:44.548 23:13:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:31:44.548 23:13:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:31:44.548 23:13:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:31:44.548 23:13:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:31:44.548 23:13:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60467 00:31:44.548 23:13:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60467' 00:31:44.548 Process raid pid: 60467 00:31:44.548 23:13:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60467 00:31:44.548 23:13:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:31:44.548 23:13:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60467 ']' 00:31:44.548 23:13:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:44.548 23:13:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:44.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:44.548 23:13:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:44.548 23:13:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:44.548 23:13:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:31:44.807 [2024-12-09 23:13:25.233296] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:31:44.807 [2024-12-09 23:13:25.233620] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:44.807 [2024-12-09 23:13:25.421979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:45.064 [2024-12-09 23:13:25.544979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:45.322 [2024-12-09 23:13:25.757036] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:45.322 [2024-12-09 23:13:25.757243] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:45.582 23:13:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:45.582 23:13:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:31:45.582 23:13:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:31:45.582 23:13:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.582 23:13:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.582 Base_1 00:31:45.582 23:13:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.582 23:13:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:31:45.582 
23:13:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.582 23:13:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.582 Base_2 00:31:45.582 23:13:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.582 23:13:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:31:45.582 23:13:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:31:45.582 23:13:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.582 23:13:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.582 [2024-12-09 23:13:26.115596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:31:45.583 [2024-12-09 23:13:26.117638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:31:45.583 [2024-12-09 23:13:26.117848] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:31:45.583 [2024-12-09 23:13:26.117874] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:31:45.583 [2024-12-09 23:13:26.118206] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:31:45.583 [2024-12-09 23:13:26.118347] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:31:45.583 [2024-12-09 23:13:26.118358] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:31:45.583 [2024-12-09 23:13:26.118533] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:45.583 23:13:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.583 23:13:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:31:45.583 
23:13:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.583 23:13:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.583 [2024-12-09 23:13:26.123572] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:31:45.583 [2024-12-09 23:13:26.123600] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:31:45.583 true 00:31:45.583 23:13:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.583 23:13:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:31:45.583 23:13:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.583 23:13:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:31:45.583 23:13:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.583 [2024-12-09 23:13:26.135736] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:45.583 23:13:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.583 23:13:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:31:45.583 23:13:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:31:45.583 23:13:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:31:45.583 23:13:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:31:45.583 23:13:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:31:45.583 23:13:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:31:45.583 23:13:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.583 23:13:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 
-- # set +x 00:31:45.583 [2024-12-09 23:13:26.179525] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:31:45.583 [2024-12-09 23:13:26.179654] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:31:45.583 [2024-12-09 23:13:26.179807] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:31:45.583 true 00:31:45.583 23:13:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.583 23:13:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:31:45.583 23:13:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.583 23:13:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.583 23:13:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:31:45.583 [2024-12-09 23:13:26.191677] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:45.583 23:13:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.842 23:13:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:31:45.842 23:13:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:31:45.842 23:13:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:31:45.842 23:13:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:31:45.842 23:13:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:31:45.842 23:13:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60467 00:31:45.842 23:13:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60467 ']' 00:31:45.842 23:13:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60467 
00:31:45.842 23:13:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:31:45.842 23:13:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:45.842 23:13:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60467 00:31:45.842 killing process with pid 60467 00:31:45.842 23:13:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:45.842 23:13:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:45.842 23:13:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60467' 00:31:45.842 23:13:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60467 00:31:45.842 [2024-12-09 23:13:26.280527] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:45.842 23:13:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60467 00:31:45.842 [2024-12-09 23:13:26.280617] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:45.842 [2024-12-09 23:13:26.280668] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:45.842 [2024-12-09 23:13:26.280680] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:31:45.842 [2024-12-09 23:13:26.298526] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:47.220 ************************************ 00:31:47.220 END TEST raid0_resize_test 00:31:47.220 ************************************ 00:31:47.220 23:13:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:31:47.220 00:31:47.220 real 0m2.327s 00:31:47.220 user 0m2.477s 00:31:47.220 sys 0m0.374s 00:31:47.220 23:13:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:47.220 23:13:27 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:31:47.220 23:13:27 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:31:47.220 23:13:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:47.220 23:13:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:47.220 23:13:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:47.220 ************************************ 00:31:47.220 START TEST raid1_resize_test 00:31:47.220 ************************************ 00:31:47.220 23:13:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:31:47.221 23:13:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:31:47.221 23:13:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:31:47.221 23:13:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:31:47.221 23:13:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:31:47.221 23:13:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:31:47.221 23:13:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:31:47.221 23:13:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:31:47.221 23:13:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:31:47.221 23:13:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60523 00:31:47.221 23:13:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:31:47.221 Process raid pid: 60523 00:31:47.221 23:13:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60523' 00:31:47.221 23:13:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60523 00:31:47.221 23:13:27 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60523 ']' 00:31:47.221 23:13:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:47.221 23:13:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:47.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:47.221 23:13:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:47.221 23:13:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:47.221 23:13:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:31:47.221 [2024-12-09 23:13:27.632186] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:31:47.221 [2024-12-09 23:13:27.632522] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:47.221 [2024-12-09 23:13:27.818926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:47.480 [2024-12-09 23:13:27.947931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:47.739 [2024-12-09 23:13:28.183352] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:47.739 [2024-12-09 23:13:28.183627] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:47.998 23:13:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:47.998 23:13:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:31:47.998 23:13:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:31:47.998 23:13:28 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.998 23:13:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:31:47.998 Base_1 00:31:47.998 23:13:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.998 23:13:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:31:47.998 23:13:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.998 23:13:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:31:47.998 Base_2 00:31:47.998 23:13:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.998 23:13:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:31:47.998 23:13:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:31:47.998 23:13:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.998 23:13:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:31:47.998 [2024-12-09 23:13:28.541630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:31:47.998 [2024-12-09 23:13:28.543846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:31:47.998 [2024-12-09 23:13:28.543907] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:31:47.998 [2024-12-09 23:13:28.543921] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:31:47.998 [2024-12-09 23:13:28.544185] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:31:47.998 [2024-12-09 23:13:28.544306] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:31:47.998 [2024-12-09 23:13:28.544315] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:31:47.998 [2024-12-09 23:13:28.544469] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:47.998 23:13:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.998 23:13:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:31:47.998 23:13:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.998 23:13:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:31:47.998 [2024-12-09 23:13:28.549600] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:31:47.998 [2024-12-09 23:13:28.549633] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:31:47.998 true 00:31:47.998 23:13:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.998 23:13:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:31:47.998 23:13:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:31:47.998 23:13:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.998 23:13:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:31:47.998 [2024-12-09 23:13:28.565782] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:47.998 23:13:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.998 23:13:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:31:47.998 23:13:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:31:47.998 23:13:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:31:47.998 23:13:28 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:31:47.998 23:13:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:31:47.998 23:13:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:31:47.999 23:13:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.999 23:13:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:31:47.999 [2024-12-09 23:13:28.613554] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:31:47.999 [2024-12-09 23:13:28.613589] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:31:47.999 [2024-12-09 23:13:28.613625] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:31:47.999 true 00:31:47.999 23:13:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.999 23:13:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:31:47.999 23:13:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:31:47.999 23:13:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.999 23:13:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:31:47.999 [2024-12-09 23:13:28.629716] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:48.257 23:13:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.257 23:13:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:31:48.257 23:13:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:31:48.257 23:13:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:31:48.257 23:13:28 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:31:48.257 23:13:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:31:48.257 23:13:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60523 00:31:48.257 23:13:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60523 ']' 00:31:48.257 23:13:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60523 00:31:48.257 23:13:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:31:48.257 23:13:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:48.257 23:13:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60523 00:31:48.257 killing process with pid 60523 00:31:48.257 23:13:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:48.257 23:13:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:48.257 23:13:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60523' 00:31:48.257 23:13:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60523 00:31:48.257 [2024-12-09 23:13:28.701347] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:48.257 [2024-12-09 23:13:28.701457] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:48.257 23:13:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60523 00:31:48.257 [2024-12-09 23:13:28.701981] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:48.257 [2024-12-09 23:13:28.702002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:31:48.257 [2024-12-09 23:13:28.721019] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:31:49.701 ************************************ 00:31:49.701 END TEST raid1_resize_test 00:31:49.701 ************************************ 00:31:49.701 23:13:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:31:49.701 00:31:49.701 real 0m2.399s 00:31:49.701 user 0m2.574s 00:31:49.701 sys 0m0.378s 00:31:49.701 23:13:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:49.701 23:13:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:31:49.701 23:13:29 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:31:49.701 23:13:29 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:31:49.701 23:13:29 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:31:49.701 23:13:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:31:49.701 23:13:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:49.701 23:13:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:49.701 ************************************ 00:31:49.701 START TEST raid_state_function_test 00:31:49.701 ************************************ 00:31:49.701 23:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:31:49.701 23:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:31:49.701 23:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:31:49.701 23:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:31:49.701 23:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:31:49.701 23:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:31:49.701 23:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 
-- # (( i <= num_base_bdevs )) 00:31:49.701 23:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:31:49.701 23:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:49.701 23:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:49.701 23:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:31:49.701 23:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:49.701 23:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:49.701 23:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:31:49.701 23:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:31:49.701 23:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:31:49.701 23:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:31:49.701 23:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:31:49.701 23:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:31:49.701 23:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:31:49.701 23:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:31:49.701 23:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:31:49.701 23:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:31:49.701 23:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:31:49.701 23:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:31:49.701 23:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60591 00:31:49.701 Process raid pid: 60591 00:31:49.701 23:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60591' 00:31:49.701 23:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60591 00:31:49.701 23:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60591 ']' 00:31:49.701 23:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:49.701 23:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:49.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:49.701 23:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:49.701 23:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:49.701 23:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:49.701 [2024-12-09 23:13:30.100469] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:31:49.701 [2024-12-09 23:13:30.100619] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:49.701 [2024-12-09 23:13:30.289307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:50.004 [2024-12-09 23:13:30.416529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:50.262 [2024-12-09 23:13:30.654030] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:50.262 [2024-12-09 23:13:30.654120] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:50.521 23:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:50.521 23:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:31:50.521 23:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:31:50.521 23:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.521 23:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:50.521 [2024-12-09 23:13:31.025142] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:50.521 [2024-12-09 23:13:31.025213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:50.521 [2024-12-09 23:13:31.025227] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:50.521 [2024-12-09 23:13:31.025240] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:50.521 23:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.521 23:13:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:31:50.521 23:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:50.521 23:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:50.521 23:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:50.521 23:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:50.521 23:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:50.521 23:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:50.521 23:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:50.521 23:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:50.521 23:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:50.521 23:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:50.521 23:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:50.521 23:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.521 23:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:50.521 23:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.521 23:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:50.521 "name": "Existed_Raid", 00:31:50.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:50.521 "strip_size_kb": 64, 00:31:50.521 "state": "configuring", 00:31:50.521 
"raid_level": "raid0", 00:31:50.521 "superblock": false, 00:31:50.521 "num_base_bdevs": 2, 00:31:50.521 "num_base_bdevs_discovered": 0, 00:31:50.521 "num_base_bdevs_operational": 2, 00:31:50.521 "base_bdevs_list": [ 00:31:50.521 { 00:31:50.521 "name": "BaseBdev1", 00:31:50.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:50.521 "is_configured": false, 00:31:50.521 "data_offset": 0, 00:31:50.521 "data_size": 0 00:31:50.521 }, 00:31:50.521 { 00:31:50.521 "name": "BaseBdev2", 00:31:50.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:50.521 "is_configured": false, 00:31:50.521 "data_offset": 0, 00:31:50.521 "data_size": 0 00:31:50.521 } 00:31:50.521 ] 00:31:50.521 }' 00:31:50.521 23:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:50.521 23:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:51.089 23:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:51.089 23:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.089 23:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:51.089 [2024-12-09 23:13:31.460511] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:51.089 [2024-12-09 23:13:31.460558] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:31:51.089 23:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.089 23:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:31:51.089 23:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.089 23:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:31:51.089 [2024-12-09 23:13:31.468527] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:51.089 [2024-12-09 23:13:31.468596] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:51.089 [2024-12-09 23:13:31.468607] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:51.089 [2024-12-09 23:13:31.468624] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:51.089 23:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.089 23:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:31:51.089 23:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.089 23:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:51.089 [2024-12-09 23:13:31.519318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:51.089 BaseBdev1 00:31:51.089 23:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.089 23:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:31:51.089 23:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:31:51.089 23:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:51.089 23:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:31:51.089 23:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:51.089 23:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:51.089 23:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:31:51.089 23:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.089 23:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:51.089 23:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.089 23:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:51.089 23:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.089 23:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:51.089 [ 00:31:51.089 { 00:31:51.089 "name": "BaseBdev1", 00:31:51.089 "aliases": [ 00:31:51.089 "a177df7a-daee-462e-8a81-fd6c8140167f" 00:31:51.089 ], 00:31:51.089 "product_name": "Malloc disk", 00:31:51.089 "block_size": 512, 00:31:51.089 "num_blocks": 65536, 00:31:51.089 "uuid": "a177df7a-daee-462e-8a81-fd6c8140167f", 00:31:51.089 "assigned_rate_limits": { 00:31:51.089 "rw_ios_per_sec": 0, 00:31:51.089 "rw_mbytes_per_sec": 0, 00:31:51.089 "r_mbytes_per_sec": 0, 00:31:51.089 "w_mbytes_per_sec": 0 00:31:51.089 }, 00:31:51.089 "claimed": true, 00:31:51.089 "claim_type": "exclusive_write", 00:31:51.089 "zoned": false, 00:31:51.089 "supported_io_types": { 00:31:51.089 "read": true, 00:31:51.089 "write": true, 00:31:51.089 "unmap": true, 00:31:51.089 "flush": true, 00:31:51.089 "reset": true, 00:31:51.089 "nvme_admin": false, 00:31:51.089 "nvme_io": false, 00:31:51.089 "nvme_io_md": false, 00:31:51.089 "write_zeroes": true, 00:31:51.089 "zcopy": true, 00:31:51.090 "get_zone_info": false, 00:31:51.090 "zone_management": false, 00:31:51.090 "zone_append": false, 00:31:51.090 "compare": false, 00:31:51.090 "compare_and_write": false, 00:31:51.090 "abort": true, 00:31:51.090 "seek_hole": false, 00:31:51.090 "seek_data": false, 00:31:51.090 "copy": true, 00:31:51.090 "nvme_iov_md": 
false 00:31:51.090 }, 00:31:51.090 "memory_domains": [ 00:31:51.090 { 00:31:51.090 "dma_device_id": "system", 00:31:51.090 "dma_device_type": 1 00:31:51.090 }, 00:31:51.090 { 00:31:51.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:51.090 "dma_device_type": 2 00:31:51.090 } 00:31:51.090 ], 00:31:51.090 "driver_specific": {} 00:31:51.090 } 00:31:51.090 ] 00:31:51.090 23:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.090 23:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:31:51.090 23:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:31:51.090 23:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:51.090 23:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:51.090 23:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:51.090 23:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:51.090 23:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:51.090 23:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:51.090 23:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:51.090 23:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:51.090 23:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:51.090 23:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:51.090 23:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:51.090 
23:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.090 23:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:51.090 23:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.090 23:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:51.090 "name": "Existed_Raid", 00:31:51.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:51.090 "strip_size_kb": 64, 00:31:51.090 "state": "configuring", 00:31:51.090 "raid_level": "raid0", 00:31:51.090 "superblock": false, 00:31:51.090 "num_base_bdevs": 2, 00:31:51.090 "num_base_bdevs_discovered": 1, 00:31:51.090 "num_base_bdevs_operational": 2, 00:31:51.090 "base_bdevs_list": [ 00:31:51.090 { 00:31:51.090 "name": "BaseBdev1", 00:31:51.090 "uuid": "a177df7a-daee-462e-8a81-fd6c8140167f", 00:31:51.090 "is_configured": true, 00:31:51.090 "data_offset": 0, 00:31:51.090 "data_size": 65536 00:31:51.090 }, 00:31:51.090 { 00:31:51.090 "name": "BaseBdev2", 00:31:51.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:51.090 "is_configured": false, 00:31:51.090 "data_offset": 0, 00:31:51.090 "data_size": 0 00:31:51.090 } 00:31:51.090 ] 00:31:51.090 }' 00:31:51.090 23:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:51.090 23:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:51.658 23:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:51.658 23:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.658 23:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:51.658 [2024-12-09 23:13:32.002731] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:51.658 [2024-12-09 23:13:32.002793] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:31:51.658 23:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.658 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:31:51.658 23:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.658 23:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:51.658 [2024-12-09 23:13:32.010765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:51.658 [2024-12-09 23:13:32.012997] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:51.658 [2024-12-09 23:13:32.013050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:51.658 23:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.658 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:31:51.658 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:51.658 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:31:51.658 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:51.658 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:51.658 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:51.658 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:51.658 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:31:51.658 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:51.658 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:51.658 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:51.658 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:51.658 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:51.658 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:51.658 23:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.658 23:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:51.658 23:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.658 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:51.658 "name": "Existed_Raid", 00:31:51.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:51.658 "strip_size_kb": 64, 00:31:51.659 "state": "configuring", 00:31:51.659 "raid_level": "raid0", 00:31:51.659 "superblock": false, 00:31:51.659 "num_base_bdevs": 2, 00:31:51.659 "num_base_bdevs_discovered": 1, 00:31:51.659 "num_base_bdevs_operational": 2, 00:31:51.659 "base_bdevs_list": [ 00:31:51.659 { 00:31:51.659 "name": "BaseBdev1", 00:31:51.659 "uuid": "a177df7a-daee-462e-8a81-fd6c8140167f", 00:31:51.659 "is_configured": true, 00:31:51.659 "data_offset": 0, 00:31:51.659 "data_size": 65536 00:31:51.659 }, 00:31:51.659 { 00:31:51.659 "name": "BaseBdev2", 00:31:51.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:51.659 "is_configured": false, 00:31:51.659 "data_offset": 0, 00:31:51.659 "data_size": 0 00:31:51.659 } 00:31:51.659 
] 00:31:51.659 }' 00:31:51.659 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:51.659 23:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:51.916 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:31:51.916 23:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.916 23:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:51.916 [2024-12-09 23:13:32.506553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:51.916 [2024-12-09 23:13:32.506617] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:31:51.917 [2024-12-09 23:13:32.506629] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:31:51.917 [2024-12-09 23:13:32.506953] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:31:51.917 [2024-12-09 23:13:32.507133] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:31:51.917 [2024-12-09 23:13:32.507148] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:31:51.917 [2024-12-09 23:13:32.507466] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:51.917 BaseBdev2 00:31:51.917 23:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.917 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:31:51.917 23:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:31:51.917 23:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:51.917 23:13:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:31:51.917 23:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:51.917 23:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:51.917 23:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:51.917 23:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.917 23:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:51.917 23:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.917 23:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:51.917 23:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.917 23:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:51.917 [ 00:31:51.917 { 00:31:51.917 "name": "BaseBdev2", 00:31:51.917 "aliases": [ 00:31:51.917 "dd1758c3-f88a-4b23-9dac-438d2c93fb65" 00:31:51.917 ], 00:31:51.917 "product_name": "Malloc disk", 00:31:51.917 "block_size": 512, 00:31:51.917 "num_blocks": 65536, 00:31:51.917 "uuid": "dd1758c3-f88a-4b23-9dac-438d2c93fb65", 00:31:51.917 "assigned_rate_limits": { 00:31:51.917 "rw_ios_per_sec": 0, 00:31:51.917 "rw_mbytes_per_sec": 0, 00:31:51.917 "r_mbytes_per_sec": 0, 00:31:51.917 "w_mbytes_per_sec": 0 00:31:51.917 }, 00:31:51.917 "claimed": true, 00:31:51.917 "claim_type": "exclusive_write", 00:31:51.917 "zoned": false, 00:31:51.917 "supported_io_types": { 00:31:51.917 "read": true, 00:31:51.917 "write": true, 00:31:51.917 "unmap": true, 00:31:51.917 "flush": true, 00:31:51.917 "reset": true, 00:31:51.917 "nvme_admin": false, 00:31:51.917 "nvme_io": false, 00:31:51.917 "nvme_io_md": 
false, 00:31:51.917 "write_zeroes": true, 00:31:51.917 "zcopy": true, 00:31:51.917 "get_zone_info": false, 00:31:51.917 "zone_management": false, 00:31:51.917 "zone_append": false, 00:31:51.917 "compare": false, 00:31:51.917 "compare_and_write": false, 00:31:51.917 "abort": true, 00:31:51.917 "seek_hole": false, 00:31:51.917 "seek_data": false, 00:31:51.917 "copy": true, 00:31:51.917 "nvme_iov_md": false 00:31:51.917 }, 00:31:51.917 "memory_domains": [ 00:31:51.917 { 00:31:51.917 "dma_device_id": "system", 00:31:51.917 "dma_device_type": 1 00:31:51.917 }, 00:31:51.917 { 00:31:51.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:51.917 "dma_device_type": 2 00:31:51.917 } 00:31:51.917 ], 00:31:51.917 "driver_specific": {} 00:31:51.917 } 00:31:51.917 ] 00:31:51.917 23:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.917 23:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:31:51.917 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:31:51.917 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:51.917 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:31:51.917 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:51.917 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:51.917 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:51.917 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:51.917 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:51.917 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:31:51.917 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:51.917 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:51.917 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:52.176 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:52.176 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:52.176 23:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.176 23:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:52.176 23:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.176 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:52.176 "name": "Existed_Raid", 00:31:52.176 "uuid": "4a9cea29-e496-4e1b-84eb-0fadb7f423d7", 00:31:52.176 "strip_size_kb": 64, 00:31:52.176 "state": "online", 00:31:52.176 "raid_level": "raid0", 00:31:52.176 "superblock": false, 00:31:52.176 "num_base_bdevs": 2, 00:31:52.176 "num_base_bdevs_discovered": 2, 00:31:52.176 "num_base_bdevs_operational": 2, 00:31:52.176 "base_bdevs_list": [ 00:31:52.176 { 00:31:52.176 "name": "BaseBdev1", 00:31:52.176 "uuid": "a177df7a-daee-462e-8a81-fd6c8140167f", 00:31:52.176 "is_configured": true, 00:31:52.176 "data_offset": 0, 00:31:52.176 "data_size": 65536 00:31:52.176 }, 00:31:52.176 { 00:31:52.176 "name": "BaseBdev2", 00:31:52.176 "uuid": "dd1758c3-f88a-4b23-9dac-438d2c93fb65", 00:31:52.176 "is_configured": true, 00:31:52.176 "data_offset": 0, 00:31:52.176 "data_size": 65536 00:31:52.176 } 00:31:52.176 ] 00:31:52.176 }' 00:31:52.176 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:31:52.176 23:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:52.435 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:31:52.435 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:31:52.435 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:52.435 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:52.435 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:31:52.435 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:52.435 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:31:52.435 23:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.435 23:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:52.435 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:52.435 [2024-12-09 23:13:32.946480] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:52.435 23:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.435 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:52.435 "name": "Existed_Raid", 00:31:52.435 "aliases": [ 00:31:52.435 "4a9cea29-e496-4e1b-84eb-0fadb7f423d7" 00:31:52.435 ], 00:31:52.435 "product_name": "Raid Volume", 00:31:52.435 "block_size": 512, 00:31:52.435 "num_blocks": 131072, 00:31:52.435 "uuid": "4a9cea29-e496-4e1b-84eb-0fadb7f423d7", 00:31:52.435 "assigned_rate_limits": { 00:31:52.435 "rw_ios_per_sec": 0, 00:31:52.435 "rw_mbytes_per_sec": 0, 00:31:52.435 "r_mbytes_per_sec": 
0, 00:31:52.435 "w_mbytes_per_sec": 0 00:31:52.435 }, 00:31:52.435 "claimed": false, 00:31:52.435 "zoned": false, 00:31:52.435 "supported_io_types": { 00:31:52.435 "read": true, 00:31:52.435 "write": true, 00:31:52.435 "unmap": true, 00:31:52.435 "flush": true, 00:31:52.435 "reset": true, 00:31:52.435 "nvme_admin": false, 00:31:52.435 "nvme_io": false, 00:31:52.435 "nvme_io_md": false, 00:31:52.435 "write_zeroes": true, 00:31:52.435 "zcopy": false, 00:31:52.435 "get_zone_info": false, 00:31:52.435 "zone_management": false, 00:31:52.435 "zone_append": false, 00:31:52.435 "compare": false, 00:31:52.435 "compare_and_write": false, 00:31:52.435 "abort": false, 00:31:52.435 "seek_hole": false, 00:31:52.435 "seek_data": false, 00:31:52.435 "copy": false, 00:31:52.435 "nvme_iov_md": false 00:31:52.435 }, 00:31:52.435 "memory_domains": [ 00:31:52.435 { 00:31:52.435 "dma_device_id": "system", 00:31:52.435 "dma_device_type": 1 00:31:52.435 }, 00:31:52.435 { 00:31:52.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:52.435 "dma_device_type": 2 00:31:52.435 }, 00:31:52.435 { 00:31:52.435 "dma_device_id": "system", 00:31:52.435 "dma_device_type": 1 00:31:52.435 }, 00:31:52.435 { 00:31:52.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:52.435 "dma_device_type": 2 00:31:52.435 } 00:31:52.435 ], 00:31:52.435 "driver_specific": { 00:31:52.435 "raid": { 00:31:52.435 "uuid": "4a9cea29-e496-4e1b-84eb-0fadb7f423d7", 00:31:52.435 "strip_size_kb": 64, 00:31:52.435 "state": "online", 00:31:52.435 "raid_level": "raid0", 00:31:52.435 "superblock": false, 00:31:52.435 "num_base_bdevs": 2, 00:31:52.435 "num_base_bdevs_discovered": 2, 00:31:52.435 "num_base_bdevs_operational": 2, 00:31:52.435 "base_bdevs_list": [ 00:31:52.435 { 00:31:52.435 "name": "BaseBdev1", 00:31:52.435 "uuid": "a177df7a-daee-462e-8a81-fd6c8140167f", 00:31:52.435 "is_configured": true, 00:31:52.435 "data_offset": 0, 00:31:52.435 "data_size": 65536 00:31:52.435 }, 00:31:52.435 { 00:31:52.435 "name": "BaseBdev2", 
00:31:52.435 "uuid": "dd1758c3-f88a-4b23-9dac-438d2c93fb65", 00:31:52.435 "is_configured": true, 00:31:52.435 "data_offset": 0, 00:31:52.435 "data_size": 65536 00:31:52.435 } 00:31:52.435 ] 00:31:52.435 } 00:31:52.435 } 00:31:52.435 }' 00:31:52.435 23:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:52.435 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:31:52.435 BaseBdev2' 00:31:52.435 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:52.435 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:31:52.435 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:52.435 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:31:52.435 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:52.435 23:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.435 23:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:52.693 23:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.693 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:52.693 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:52.693 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:52.693 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:31:52.693 23:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.693 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:52.693 23:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:52.693 23:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.693 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:52.693 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:52.693 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:31:52.693 23:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.693 23:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:52.693 [2024-12-09 23:13:33.138238] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:52.693 [2024-12-09 23:13:33.138283] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:52.693 [2024-12-09 23:13:33.138341] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:52.693 23:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.693 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:31:52.693 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:31:52.693 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:31:52.693 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:31:52.693 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:31:52.693 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:31:52.693 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:52.693 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:31:52.693 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:52.693 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:52.693 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:31:52.693 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:52.693 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:52.693 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:52.693 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:52.693 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:52.693 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:52.693 23:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.693 23:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:52.693 23:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.693 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:52.693 "name": "Existed_Raid", 00:31:52.693 "uuid": "4a9cea29-e496-4e1b-84eb-0fadb7f423d7", 00:31:52.693 "strip_size_kb": 64, 00:31:52.693 
"state": "offline", 00:31:52.693 "raid_level": "raid0", 00:31:52.693 "superblock": false, 00:31:52.693 "num_base_bdevs": 2, 00:31:52.693 "num_base_bdevs_discovered": 1, 00:31:52.693 "num_base_bdevs_operational": 1, 00:31:52.693 "base_bdevs_list": [ 00:31:52.693 { 00:31:52.693 "name": null, 00:31:52.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:52.693 "is_configured": false, 00:31:52.693 "data_offset": 0, 00:31:52.693 "data_size": 65536 00:31:52.693 }, 00:31:52.693 { 00:31:52.693 "name": "BaseBdev2", 00:31:52.693 "uuid": "dd1758c3-f88a-4b23-9dac-438d2c93fb65", 00:31:52.693 "is_configured": true, 00:31:52.693 "data_offset": 0, 00:31:52.693 "data_size": 65536 00:31:52.693 } 00:31:52.693 ] 00:31:52.693 }' 00:31:52.693 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:52.693 23:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:53.287 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:31:53.287 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:53.287 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:53.287 23:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.287 23:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:53.287 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:31:53.287 23:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.287 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:31:53.287 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:53.287 23:13:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:31:53.287 23:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.287 23:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:53.287 [2024-12-09 23:13:33.722713] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:53.287 [2024-12-09 23:13:33.722781] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:31:53.287 23:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.287 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:31:53.287 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:53.287 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:53.287 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:31:53.287 23:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.287 23:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:53.287 23:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.287 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:31:53.287 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:31:53.287 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:31:53.287 23:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60591 00:31:53.287 23:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60591 ']' 00:31:53.287 23:13:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 60591 00:31:53.287 23:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:31:53.287 23:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:53.287 23:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60591 00:31:53.287 23:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:53.287 23:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:53.287 killing process with pid 60591 00:31:53.287 23:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60591' 00:31:53.287 23:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60591 00:31:53.287 [2024-12-09 23:13:33.921489] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:53.287 23:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60591 00:31:53.546 [2024-12-09 23:13:33.939262] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:54.921 23:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:31:54.921 00:31:54.921 real 0m5.132s 00:31:54.921 user 0m7.390s 00:31:54.921 sys 0m0.866s 00:31:54.921 23:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:54.921 23:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:54.921 ************************************ 00:31:54.921 END TEST raid_state_function_test 00:31:54.921 ************************************ 00:31:54.921 23:13:35 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:31:54.921 23:13:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:31:54.921 23:13:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:54.921 23:13:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:54.921 ************************************ 00:31:54.921 START TEST raid_state_function_test_sb 00:31:54.921 ************************************ 00:31:54.921 23:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:31:54.921 23:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:31:54.921 23:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:31:54.921 23:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:31:54.921 23:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:31:54.921 23:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:31:54.921 23:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:54.921 23:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:31:54.921 23:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:54.921 23:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:54.921 23:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:31:54.921 23:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:54.921 23:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:54.921 23:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:31:54.921 23:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:31:54.921 23:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:31:54.921 23:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:31:54.922 23:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:31:54.922 23:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:31:54.922 23:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:31:54.922 23:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:31:54.922 23:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:31:54.922 23:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:31:54.922 23:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:31:54.922 23:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60843 00:31:54.922 Process raid pid: 60843 00:31:54.922 23:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60843' 00:31:54.922 23:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60843 00:31:54.922 23:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60843 ']' 00:31:54.922 23:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:54.922 23:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:31:54.922 23:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:54.922 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:31:54.922 23:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:54.922 23:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:54.922 23:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:54.922 [2024-12-09 23:13:35.315735] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:31:54.922 [2024-12-09 23:13:35.315878] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:54.922 [2024-12-09 23:13:35.512324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:55.180 [2024-12-09 23:13:35.635132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:55.438 [2024-12-09 23:13:35.880842] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:55.438 [2024-12-09 23:13:35.880902] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:55.696 23:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:55.696 23:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:31:55.696 23:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:31:55.696 23:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.696 23:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:55.696 [2024-12-09 23:13:36.169437] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:31:55.696 [2024-12-09 23:13:36.169516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:55.696 [2024-12-09 23:13:36.169529] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:55.696 [2024-12-09 23:13:36.169543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:55.696 23:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.696 23:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:31:55.696 23:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:55.696 23:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:55.696 23:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:55.696 23:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:55.696 23:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:55.696 23:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:55.696 23:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:55.696 23:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:55.696 23:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:55.696 23:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:55.696 23:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:31:55.696 23:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.696 23:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:55.697 23:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.697 23:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:55.697 "name": "Existed_Raid", 00:31:55.697 "uuid": "f8df5787-1e45-455f-92b0-d7fee2947fea", 00:31:55.697 "strip_size_kb": 64, 00:31:55.697 "state": "configuring", 00:31:55.697 "raid_level": "raid0", 00:31:55.697 "superblock": true, 00:31:55.697 "num_base_bdevs": 2, 00:31:55.697 "num_base_bdevs_discovered": 0, 00:31:55.697 "num_base_bdevs_operational": 2, 00:31:55.697 "base_bdevs_list": [ 00:31:55.697 { 00:31:55.697 "name": "BaseBdev1", 00:31:55.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:55.697 "is_configured": false, 00:31:55.697 "data_offset": 0, 00:31:55.697 "data_size": 0 00:31:55.697 }, 00:31:55.697 { 00:31:55.697 "name": "BaseBdev2", 00:31:55.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:55.697 "is_configured": false, 00:31:55.697 "data_offset": 0, 00:31:55.697 "data_size": 0 00:31:55.697 } 00:31:55.697 ] 00:31:55.697 }' 00:31:55.697 23:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:55.697 23:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:55.955 23:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:55.955 23:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.955 23:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:56.214 [2024-12-09 23:13:36.592792] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:31:56.214 [2024-12-09 23:13:36.592839] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:31:56.214 23:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.214 23:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:31:56.214 23:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.214 23:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:56.214 [2024-12-09 23:13:36.600757] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:56.214 [2024-12-09 23:13:36.600804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:56.214 [2024-12-09 23:13:36.600832] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:56.214 [2024-12-09 23:13:36.600848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:56.214 23:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.214 23:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:31:56.214 23:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.214 23:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:56.214 [2024-12-09 23:13:36.648222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:56.214 BaseBdev1 00:31:56.214 23:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.214 23:13:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:31:56.214 23:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:31:56.214 23:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:56.214 23:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:31:56.214 23:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:56.214 23:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:56.214 23:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:56.214 23:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.214 23:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:56.214 23:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.214 23:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:56.214 23:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.214 23:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:56.214 [ 00:31:56.214 { 00:31:56.214 "name": "BaseBdev1", 00:31:56.214 "aliases": [ 00:31:56.214 "5acdd3e9-9a2e-409d-8d27-c0c277877fbe" 00:31:56.214 ], 00:31:56.214 "product_name": "Malloc disk", 00:31:56.214 "block_size": 512, 00:31:56.214 "num_blocks": 65536, 00:31:56.214 "uuid": "5acdd3e9-9a2e-409d-8d27-c0c277877fbe", 00:31:56.214 "assigned_rate_limits": { 00:31:56.214 "rw_ios_per_sec": 0, 00:31:56.214 "rw_mbytes_per_sec": 0, 00:31:56.214 "r_mbytes_per_sec": 0, 00:31:56.214 "w_mbytes_per_sec": 0 00:31:56.214 }, 00:31:56.214 "claimed": true, 
00:31:56.214 "claim_type": "exclusive_write", 00:31:56.214 "zoned": false, 00:31:56.214 "supported_io_types": { 00:31:56.215 "read": true, 00:31:56.215 "write": true, 00:31:56.215 "unmap": true, 00:31:56.215 "flush": true, 00:31:56.215 "reset": true, 00:31:56.215 "nvme_admin": false, 00:31:56.215 "nvme_io": false, 00:31:56.215 "nvme_io_md": false, 00:31:56.215 "write_zeroes": true, 00:31:56.215 "zcopy": true, 00:31:56.215 "get_zone_info": false, 00:31:56.215 "zone_management": false, 00:31:56.215 "zone_append": false, 00:31:56.215 "compare": false, 00:31:56.215 "compare_and_write": false, 00:31:56.215 "abort": true, 00:31:56.215 "seek_hole": false, 00:31:56.215 "seek_data": false, 00:31:56.215 "copy": true, 00:31:56.215 "nvme_iov_md": false 00:31:56.215 }, 00:31:56.215 "memory_domains": [ 00:31:56.215 { 00:31:56.215 "dma_device_id": "system", 00:31:56.215 "dma_device_type": 1 00:31:56.215 }, 00:31:56.215 { 00:31:56.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:56.215 "dma_device_type": 2 00:31:56.215 } 00:31:56.215 ], 00:31:56.215 "driver_specific": {} 00:31:56.215 } 00:31:56.215 ] 00:31:56.215 23:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.215 23:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:31:56.215 23:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:31:56.215 23:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:56.215 23:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:56.215 23:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:56.215 23:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:56.215 23:13:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:56.215 23:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:56.215 23:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:56.215 23:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:56.215 23:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:56.215 23:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:56.215 23:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.215 23:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:56.215 23:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:56.215 23:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.215 23:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:56.215 "name": "Existed_Raid", 00:31:56.215 "uuid": "524c721d-a949-4055-8d2f-3de135f44513", 00:31:56.215 "strip_size_kb": 64, 00:31:56.215 "state": "configuring", 00:31:56.215 "raid_level": "raid0", 00:31:56.215 "superblock": true, 00:31:56.215 "num_base_bdevs": 2, 00:31:56.215 "num_base_bdevs_discovered": 1, 00:31:56.215 "num_base_bdevs_operational": 2, 00:31:56.215 "base_bdevs_list": [ 00:31:56.215 { 00:31:56.215 "name": "BaseBdev1", 00:31:56.215 "uuid": "5acdd3e9-9a2e-409d-8d27-c0c277877fbe", 00:31:56.215 "is_configured": true, 00:31:56.215 "data_offset": 2048, 00:31:56.215 "data_size": 63488 00:31:56.215 }, 00:31:56.215 { 00:31:56.215 "name": "BaseBdev2", 00:31:56.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:56.215 
"is_configured": false, 00:31:56.215 "data_offset": 0, 00:31:56.215 "data_size": 0 00:31:56.215 } 00:31:56.215 ] 00:31:56.215 }' 00:31:56.215 23:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:56.215 23:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:56.782 23:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:56.782 23:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.782 23:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:56.782 [2024-12-09 23:13:37.115637] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:56.782 [2024-12-09 23:13:37.115840] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:31:56.782 23:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.782 23:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:31:56.782 23:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.782 23:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:56.782 [2024-12-09 23:13:37.127694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:56.782 [2024-12-09 23:13:37.129886] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:56.782 [2024-12-09 23:13:37.130093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:56.782 23:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.782 23:13:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:31:56.783 23:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:56.783 23:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:31:56.783 23:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:56.783 23:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:56.783 23:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:56.783 23:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:56.783 23:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:56.783 23:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:56.783 23:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:56.783 23:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:56.783 23:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:56.783 23:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:56.783 23:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.783 23:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:56.783 23:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:56.783 23:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.783 23:13:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:56.783 "name": "Existed_Raid", 00:31:56.783 "uuid": "1c4163a6-2cb9-4fbe-bf27-fc09c24955cc", 00:31:56.783 "strip_size_kb": 64, 00:31:56.783 "state": "configuring", 00:31:56.783 "raid_level": "raid0", 00:31:56.783 "superblock": true, 00:31:56.783 "num_base_bdevs": 2, 00:31:56.783 "num_base_bdevs_discovered": 1, 00:31:56.783 "num_base_bdevs_operational": 2, 00:31:56.783 "base_bdevs_list": [ 00:31:56.783 { 00:31:56.783 "name": "BaseBdev1", 00:31:56.783 "uuid": "5acdd3e9-9a2e-409d-8d27-c0c277877fbe", 00:31:56.783 "is_configured": true, 00:31:56.783 "data_offset": 2048, 00:31:56.783 "data_size": 63488 00:31:56.783 }, 00:31:56.783 { 00:31:56.783 "name": "BaseBdev2", 00:31:56.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:56.783 "is_configured": false, 00:31:56.783 "data_offset": 0, 00:31:56.783 "data_size": 0 00:31:56.783 } 00:31:56.783 ] 00:31:56.783 }' 00:31:56.783 23:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:56.783 23:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:57.041 23:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:31:57.041 23:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.041 23:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:57.041 [2024-12-09 23:13:37.610605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:57.041 [2024-12-09 23:13:37.610869] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:31:57.041 [2024-12-09 23:13:37.610888] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:57.041 [2024-12-09 23:13:37.611216] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:31:57.041 BaseBdev2 00:31:57.041 [2024-12-09 23:13:37.611364] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:31:57.041 [2024-12-09 23:13:37.611381] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:31:57.041 [2024-12-09 23:13:37.611541] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:57.041 23:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.041 23:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:31:57.041 23:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:31:57.041 23:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:57.041 23:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:31:57.042 23:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:57.042 23:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:57.042 23:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:57.042 23:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.042 23:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:57.042 23:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.042 23:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:57.042 23:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.042 23:13:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:57.042 [ 00:31:57.042 { 00:31:57.042 "name": "BaseBdev2", 00:31:57.042 "aliases": [ 00:31:57.042 "8a68993b-0f45-45f6-b0eb-213526f24d0f" 00:31:57.042 ], 00:31:57.042 "product_name": "Malloc disk", 00:31:57.042 "block_size": 512, 00:31:57.042 "num_blocks": 65536, 00:31:57.042 "uuid": "8a68993b-0f45-45f6-b0eb-213526f24d0f", 00:31:57.042 "assigned_rate_limits": { 00:31:57.042 "rw_ios_per_sec": 0, 00:31:57.042 "rw_mbytes_per_sec": 0, 00:31:57.042 "r_mbytes_per_sec": 0, 00:31:57.042 "w_mbytes_per_sec": 0 00:31:57.042 }, 00:31:57.042 "claimed": true, 00:31:57.042 "claim_type": "exclusive_write", 00:31:57.042 "zoned": false, 00:31:57.042 "supported_io_types": { 00:31:57.042 "read": true, 00:31:57.042 "write": true, 00:31:57.042 "unmap": true, 00:31:57.042 "flush": true, 00:31:57.042 "reset": true, 00:31:57.042 "nvme_admin": false, 00:31:57.042 "nvme_io": false, 00:31:57.042 "nvme_io_md": false, 00:31:57.042 "write_zeroes": true, 00:31:57.042 "zcopy": true, 00:31:57.042 "get_zone_info": false, 00:31:57.042 "zone_management": false, 00:31:57.042 "zone_append": false, 00:31:57.042 "compare": false, 00:31:57.042 "compare_and_write": false, 00:31:57.042 "abort": true, 00:31:57.042 "seek_hole": false, 00:31:57.042 "seek_data": false, 00:31:57.042 "copy": true, 00:31:57.042 "nvme_iov_md": false 00:31:57.042 }, 00:31:57.042 "memory_domains": [ 00:31:57.042 { 00:31:57.042 "dma_device_id": "system", 00:31:57.042 "dma_device_type": 1 00:31:57.042 }, 00:31:57.042 { 00:31:57.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:57.042 "dma_device_type": 2 00:31:57.042 } 00:31:57.042 ], 00:31:57.042 "driver_specific": {} 00:31:57.042 } 00:31:57.042 ] 00:31:57.042 23:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.042 23:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:31:57.042 23:13:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:31:57.042 23:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:57.042 23:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:31:57.042 23:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:57.042 23:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:57.042 23:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:57.042 23:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:57.042 23:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:57.042 23:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:57.042 23:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:57.042 23:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:57.042 23:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:57.042 23:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:57.042 23:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:57.042 23:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.042 23:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:57.300 23:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.300 23:13:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:57.300 "name": "Existed_Raid", 00:31:57.300 "uuid": "1c4163a6-2cb9-4fbe-bf27-fc09c24955cc", 00:31:57.300 "strip_size_kb": 64, 00:31:57.300 "state": "online", 00:31:57.300 "raid_level": "raid0", 00:31:57.300 "superblock": true, 00:31:57.300 "num_base_bdevs": 2, 00:31:57.300 "num_base_bdevs_discovered": 2, 00:31:57.300 "num_base_bdevs_operational": 2, 00:31:57.300 "base_bdevs_list": [ 00:31:57.300 { 00:31:57.300 "name": "BaseBdev1", 00:31:57.301 "uuid": "5acdd3e9-9a2e-409d-8d27-c0c277877fbe", 00:31:57.301 "is_configured": true, 00:31:57.301 "data_offset": 2048, 00:31:57.301 "data_size": 63488 00:31:57.301 }, 00:31:57.301 { 00:31:57.301 "name": "BaseBdev2", 00:31:57.301 "uuid": "8a68993b-0f45-45f6-b0eb-213526f24d0f", 00:31:57.301 "is_configured": true, 00:31:57.301 "data_offset": 2048, 00:31:57.301 "data_size": 63488 00:31:57.301 } 00:31:57.301 ] 00:31:57.301 }' 00:31:57.301 23:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:57.301 23:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:57.559 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:31:57.559 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:31:57.559 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:57.559 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:57.559 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:31:57.559 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:57.559 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:31:57.559 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:57.559 23:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.559 23:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:57.559 [2024-12-09 23:13:38.090472] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:57.559 23:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.559 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:57.559 "name": "Existed_Raid", 00:31:57.559 "aliases": [ 00:31:57.559 "1c4163a6-2cb9-4fbe-bf27-fc09c24955cc" 00:31:57.559 ], 00:31:57.559 "product_name": "Raid Volume", 00:31:57.559 "block_size": 512, 00:31:57.559 "num_blocks": 126976, 00:31:57.559 "uuid": "1c4163a6-2cb9-4fbe-bf27-fc09c24955cc", 00:31:57.559 "assigned_rate_limits": { 00:31:57.559 "rw_ios_per_sec": 0, 00:31:57.559 "rw_mbytes_per_sec": 0, 00:31:57.559 "r_mbytes_per_sec": 0, 00:31:57.559 "w_mbytes_per_sec": 0 00:31:57.559 }, 00:31:57.559 "claimed": false, 00:31:57.559 "zoned": false, 00:31:57.559 "supported_io_types": { 00:31:57.559 "read": true, 00:31:57.559 "write": true, 00:31:57.559 "unmap": true, 00:31:57.559 "flush": true, 00:31:57.559 "reset": true, 00:31:57.559 "nvme_admin": false, 00:31:57.559 "nvme_io": false, 00:31:57.559 "nvme_io_md": false, 00:31:57.559 "write_zeroes": true, 00:31:57.559 "zcopy": false, 00:31:57.559 "get_zone_info": false, 00:31:57.559 "zone_management": false, 00:31:57.559 "zone_append": false, 00:31:57.559 "compare": false, 00:31:57.559 "compare_and_write": false, 00:31:57.559 "abort": false, 00:31:57.559 "seek_hole": false, 00:31:57.559 "seek_data": false, 00:31:57.559 "copy": false, 00:31:57.559 "nvme_iov_md": false 00:31:57.559 }, 00:31:57.559 "memory_domains": [ 00:31:57.559 { 00:31:57.559 
"dma_device_id": "system", 00:31:57.559 "dma_device_type": 1 00:31:57.559 }, 00:31:57.559 { 00:31:57.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:57.559 "dma_device_type": 2 00:31:57.559 }, 00:31:57.559 { 00:31:57.559 "dma_device_id": "system", 00:31:57.559 "dma_device_type": 1 00:31:57.559 }, 00:31:57.559 { 00:31:57.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:57.559 "dma_device_type": 2 00:31:57.559 } 00:31:57.559 ], 00:31:57.559 "driver_specific": { 00:31:57.559 "raid": { 00:31:57.559 "uuid": "1c4163a6-2cb9-4fbe-bf27-fc09c24955cc", 00:31:57.559 "strip_size_kb": 64, 00:31:57.559 "state": "online", 00:31:57.559 "raid_level": "raid0", 00:31:57.559 "superblock": true, 00:31:57.559 "num_base_bdevs": 2, 00:31:57.559 "num_base_bdevs_discovered": 2, 00:31:57.559 "num_base_bdevs_operational": 2, 00:31:57.559 "base_bdevs_list": [ 00:31:57.559 { 00:31:57.559 "name": "BaseBdev1", 00:31:57.559 "uuid": "5acdd3e9-9a2e-409d-8d27-c0c277877fbe", 00:31:57.559 "is_configured": true, 00:31:57.559 "data_offset": 2048, 00:31:57.559 "data_size": 63488 00:31:57.559 }, 00:31:57.559 { 00:31:57.559 "name": "BaseBdev2", 00:31:57.559 "uuid": "8a68993b-0f45-45f6-b0eb-213526f24d0f", 00:31:57.559 "is_configured": true, 00:31:57.559 "data_offset": 2048, 00:31:57.559 "data_size": 63488 00:31:57.559 } 00:31:57.559 ] 00:31:57.559 } 00:31:57.559 } 00:31:57.559 }' 00:31:57.559 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:57.559 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:31:57.559 BaseBdev2' 00:31:57.559 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:57.559 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:31:57.559 23:13:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:57.818 [2024-12-09 23:13:38.290013] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:57.818 [2024-12-09 23:13:38.290071] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:57.818 [2024-12-09 23:13:38.290141] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:57.818 "name": "Existed_Raid", 00:31:57.818 "uuid": "1c4163a6-2cb9-4fbe-bf27-fc09c24955cc", 00:31:57.818 "strip_size_kb": 64, 00:31:57.818 "state": "offline", 00:31:57.818 "raid_level": "raid0", 00:31:57.818 "superblock": true, 00:31:57.818 "num_base_bdevs": 2, 00:31:57.818 "num_base_bdevs_discovered": 1, 00:31:57.818 "num_base_bdevs_operational": 1, 00:31:57.818 "base_bdevs_list": [ 00:31:57.818 { 00:31:57.818 "name": null, 00:31:57.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:57.818 "is_configured": false, 00:31:57.818 "data_offset": 0, 00:31:57.818 "data_size": 63488 00:31:57.818 }, 00:31:57.818 { 00:31:57.818 "name": "BaseBdev2", 00:31:57.818 "uuid": "8a68993b-0f45-45f6-b0eb-213526f24d0f", 00:31:57.818 "is_configured": true, 00:31:57.818 "data_offset": 2048, 00:31:57.818 "data_size": 63488 00:31:57.818 } 00:31:57.818 ] 
00:31:57.818 }' 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:57.818 23:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:58.386 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:31:58.386 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:58.386 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:58.386 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:31:58.386 23:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.386 23:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:58.386 23:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.386 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:31:58.386 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:58.386 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:31:58.386 23:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.386 23:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:58.386 [2024-12-09 23:13:38.838253] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:58.386 [2024-12-09 23:13:38.838317] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:31:58.386 23:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.386 23:13:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:31:58.386 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:58.386 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:58.386 23:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.387 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:31:58.387 23:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:58.387 23:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.387 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:31:58.387 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:31:58.387 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:31:58.387 23:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60843 00:31:58.387 23:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60843 ']' 00:31:58.387 23:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60843 00:31:58.387 23:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:31:58.387 23:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:58.387 23:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60843 00:31:58.645 killing process with pid 60843 00:31:58.646 23:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:58.646 23:13:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:58.646 23:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60843' 00:31:58.646 23:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60843 00:31:58.646 [2024-12-09 23:13:39.024938] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:58.646 23:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60843 00:31:58.646 [2024-12-09 23:13:39.042779] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:00.021 23:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:32:00.021 00:32:00.021 real 0m5.033s 00:32:00.021 user 0m7.125s 00:32:00.021 sys 0m0.908s 00:32:00.021 23:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:00.021 23:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:00.021 ************************************ 00:32:00.021 END TEST raid_state_function_test_sb 00:32:00.021 ************************************ 00:32:00.021 23:13:40 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:32:00.021 23:13:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:00.021 23:13:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:00.022 23:13:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:00.022 ************************************ 00:32:00.022 START TEST raid_superblock_test 00:32:00.022 ************************************ 00:32:00.022 23:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:32:00.022 23:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:32:00.022 23:13:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:32:00.022 23:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:32:00.022 23:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:32:00.022 23:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:32:00.022 23:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:32:00.022 23:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:32:00.022 23:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:32:00.022 23:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:32:00.022 23:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:32:00.022 23:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:32:00.022 23:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:32:00.022 23:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:32:00.022 23:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:32:00.022 23:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:32:00.022 23:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:32:00.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:00.022 23:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61091 00:32:00.022 23:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61091 00:32:00.022 23:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61091 ']' 00:32:00.022 23:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:32:00.022 23:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:00.022 23:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:00.022 23:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:00.022 23:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:00.022 23:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:00.022 [2024-12-09 23:13:40.417850] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:32:00.022 [2024-12-09 23:13:40.417990] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61091 ] 00:32:00.022 [2024-12-09 23:13:40.604730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:00.280 [2024-12-09 23:13:40.726648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:00.542 [2024-12-09 23:13:40.937886] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:00.542 [2024-12-09 23:13:40.937933] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:32:00.804 
23:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:00.804 malloc1 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:00.804 [2024-12-09 23:13:41.328442] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:00.804 [2024-12-09 23:13:41.328641] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:00.804 [2024-12-09 23:13:41.328726] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:32:00.804 [2024-12-09 23:13:41.328832] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:00.804 [2024-12-09 23:13:41.331449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:00.804 [2024-12-09 23:13:41.331598] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:00.804 pt1 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:00.804 malloc2 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:00.804 [2024-12-09 23:13:41.393438] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:00.804 [2024-12-09 23:13:41.393619] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:00.804 [2024-12-09 23:13:41.393655] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:32:00.804 [2024-12-09 23:13:41.393668] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:00.804 [2024-12-09 23:13:41.396240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:00.804 [2024-12-09 23:13:41.396283] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:00.804 
pt2 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:00.804 [2024-12-09 23:13:41.405486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:00.804 [2024-12-09 23:13:41.407738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:00.804 [2024-12-09 23:13:41.407898] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:32:00.804 [2024-12-09 23:13:41.407914] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:32:00.804 [2024-12-09 23:13:41.408193] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:32:00.804 [2024-12-09 23:13:41.408340] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:32:00.804 [2024-12-09 23:13:41.408354] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:32:00.804 [2024-12-09 23:13:41.408528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:00.804 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:00.805 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:00.805 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:00.805 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:00.805 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:00.805 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:00.805 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:00.805 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:00.805 23:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.805 23:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:00.805 23:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.063 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:01.063 "name": "raid_bdev1", 00:32:01.063 "uuid": "b01cbeba-fd52-4bb8-bed4-a20f586cf6c7", 00:32:01.063 "strip_size_kb": 64, 00:32:01.063 "state": "online", 00:32:01.063 "raid_level": "raid0", 00:32:01.063 "superblock": true, 00:32:01.063 "num_base_bdevs": 2, 00:32:01.063 "num_base_bdevs_discovered": 2, 00:32:01.063 "num_base_bdevs_operational": 2, 00:32:01.063 "base_bdevs_list": [ 00:32:01.063 { 00:32:01.063 "name": "pt1", 
00:32:01.063 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:01.063 "is_configured": true, 00:32:01.063 "data_offset": 2048, 00:32:01.063 "data_size": 63488 00:32:01.063 }, 00:32:01.063 { 00:32:01.063 "name": "pt2", 00:32:01.063 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:01.063 "is_configured": true, 00:32:01.063 "data_offset": 2048, 00:32:01.063 "data_size": 63488 00:32:01.063 } 00:32:01.063 ] 00:32:01.063 }' 00:32:01.063 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:01.063 23:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:01.321 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:32:01.321 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:32:01.321 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:01.321 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:01.321 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:32:01.321 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:01.321 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:01.321 23:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.321 23:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:01.321 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:01.321 [2024-12-09 23:13:41.833133] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:01.321 23:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.321 23:13:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:01.321 "name": "raid_bdev1", 00:32:01.321 "aliases": [ 00:32:01.321 "b01cbeba-fd52-4bb8-bed4-a20f586cf6c7" 00:32:01.321 ], 00:32:01.321 "product_name": "Raid Volume", 00:32:01.321 "block_size": 512, 00:32:01.321 "num_blocks": 126976, 00:32:01.321 "uuid": "b01cbeba-fd52-4bb8-bed4-a20f586cf6c7", 00:32:01.321 "assigned_rate_limits": { 00:32:01.321 "rw_ios_per_sec": 0, 00:32:01.321 "rw_mbytes_per_sec": 0, 00:32:01.321 "r_mbytes_per_sec": 0, 00:32:01.321 "w_mbytes_per_sec": 0 00:32:01.321 }, 00:32:01.321 "claimed": false, 00:32:01.321 "zoned": false, 00:32:01.321 "supported_io_types": { 00:32:01.321 "read": true, 00:32:01.321 "write": true, 00:32:01.321 "unmap": true, 00:32:01.321 "flush": true, 00:32:01.321 "reset": true, 00:32:01.321 "nvme_admin": false, 00:32:01.321 "nvme_io": false, 00:32:01.321 "nvme_io_md": false, 00:32:01.321 "write_zeroes": true, 00:32:01.321 "zcopy": false, 00:32:01.321 "get_zone_info": false, 00:32:01.321 "zone_management": false, 00:32:01.321 "zone_append": false, 00:32:01.321 "compare": false, 00:32:01.321 "compare_and_write": false, 00:32:01.321 "abort": false, 00:32:01.321 "seek_hole": false, 00:32:01.321 "seek_data": false, 00:32:01.321 "copy": false, 00:32:01.321 "nvme_iov_md": false 00:32:01.321 }, 00:32:01.321 "memory_domains": [ 00:32:01.321 { 00:32:01.321 "dma_device_id": "system", 00:32:01.321 "dma_device_type": 1 00:32:01.321 }, 00:32:01.321 { 00:32:01.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:01.321 "dma_device_type": 2 00:32:01.321 }, 00:32:01.321 { 00:32:01.321 "dma_device_id": "system", 00:32:01.321 "dma_device_type": 1 00:32:01.321 }, 00:32:01.321 { 00:32:01.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:01.321 "dma_device_type": 2 00:32:01.321 } 00:32:01.321 ], 00:32:01.321 "driver_specific": { 00:32:01.321 "raid": { 00:32:01.321 "uuid": "b01cbeba-fd52-4bb8-bed4-a20f586cf6c7", 00:32:01.321 "strip_size_kb": 64, 00:32:01.321 "state": "online", 00:32:01.321 
"raid_level": "raid0", 00:32:01.321 "superblock": true, 00:32:01.321 "num_base_bdevs": 2, 00:32:01.321 "num_base_bdevs_discovered": 2, 00:32:01.321 "num_base_bdevs_operational": 2, 00:32:01.321 "base_bdevs_list": [ 00:32:01.321 { 00:32:01.321 "name": "pt1", 00:32:01.321 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:01.321 "is_configured": true, 00:32:01.321 "data_offset": 2048, 00:32:01.321 "data_size": 63488 00:32:01.321 }, 00:32:01.321 { 00:32:01.321 "name": "pt2", 00:32:01.321 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:01.321 "is_configured": true, 00:32:01.321 "data_offset": 2048, 00:32:01.321 "data_size": 63488 00:32:01.321 } 00:32:01.321 ] 00:32:01.321 } 00:32:01.321 } 00:32:01.321 }' 00:32:01.321 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:01.321 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:32:01.321 pt2' 00:32:01.321 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:01.580 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:01.580 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:01.580 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:32:01.580 23:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.580 23:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:01.580 23:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:01.580 23:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.580 23:13:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:01.580 [2024-12-09 23:13:42.080847] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b01cbeba-fd52-4bb8-bed4-a20f586cf6c7 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
b01cbeba-fd52-4bb8-bed4-a20f586cf6c7 ']' 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:01.580 [2024-12-09 23:13:42.132532] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:01.580 [2024-12-09 23:13:42.132566] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:01.580 [2024-12-09 23:13:42.132661] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:01.580 [2024-12-09 23:13:42.132713] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:01.580 [2024-12-09 23:13:42.132729] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:01.580 23:13:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:01.580 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:32:01.839 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.839 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:32:01.839 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:32:01.839 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:32:01.839 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create 
-z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:32:01.839 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:01.839 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:01.839 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:01.839 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:01.839 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:32:01.839 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.839 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:01.839 [2024-12-09 23:13:42.252593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:32:01.839 [2024-12-09 23:13:42.254950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:32:01.839 [2024-12-09 23:13:42.255026] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:32:01.839 [2024-12-09 23:13:42.255088] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:32:01.840 [2024-12-09 23:13:42.255109] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:01.840 [2024-12-09 23:13:42.255125] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:32:01.840 request: 00:32:01.840 { 00:32:01.840 "name": "raid_bdev1", 00:32:01.840 "raid_level": "raid0", 00:32:01.840 "base_bdevs": [ 00:32:01.840 "malloc1", 00:32:01.840 "malloc2" 00:32:01.840 ], 00:32:01.840 "strip_size_kb": 64, 00:32:01.840 
"superblock": false, 00:32:01.840 "method": "bdev_raid_create", 00:32:01.840 "req_id": 1 00:32:01.840 } 00:32:01.840 Got JSON-RPC error response 00:32:01.840 response: 00:32:01.840 { 00:32:01.840 "code": -17, 00:32:01.840 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:32:01.840 } 00:32:01.840 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:01.840 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:32:01.840 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:01.840 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:01.840 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:01.840 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:01.840 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:32:01.840 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.840 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:01.840 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.840 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:32:01.840 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:32:01.840 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:01.840 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.840 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:01.840 [2024-12-09 23:13:42.304565] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: 
Match on malloc1 00:32:01.840 [2024-12-09 23:13:42.304798] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:01.840 [2024-12-09 23:13:42.304867] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:32:01.840 [2024-12-09 23:13:42.304978] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:01.840 [2024-12-09 23:13:42.308014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:01.840 [2024-12-09 23:13:42.308203] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:01.840 [2024-12-09 23:13:42.308441] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:32:01.840 [2024-12-09 23:13:42.308642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:01.840 pt1 00:32:01.840 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.840 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:32:01.840 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:01.840 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:01.840 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:01.840 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:01.840 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:01.840 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:01.840 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:01.840 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:32:01.840 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:01.840 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:01.840 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.840 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:01.840 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:01.840 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.840 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:01.840 "name": "raid_bdev1", 00:32:01.840 "uuid": "b01cbeba-fd52-4bb8-bed4-a20f586cf6c7", 00:32:01.840 "strip_size_kb": 64, 00:32:01.840 "state": "configuring", 00:32:01.840 "raid_level": "raid0", 00:32:01.840 "superblock": true, 00:32:01.840 "num_base_bdevs": 2, 00:32:01.840 "num_base_bdevs_discovered": 1, 00:32:01.840 "num_base_bdevs_operational": 2, 00:32:01.840 "base_bdevs_list": [ 00:32:01.840 { 00:32:01.840 "name": "pt1", 00:32:01.840 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:01.840 "is_configured": true, 00:32:01.840 "data_offset": 2048, 00:32:01.840 "data_size": 63488 00:32:01.840 }, 00:32:01.840 { 00:32:01.840 "name": null, 00:32:01.840 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:01.840 "is_configured": false, 00:32:01.840 "data_offset": 2048, 00:32:01.840 "data_size": 63488 00:32:01.840 } 00:32:01.840 ] 00:32:01.840 }' 00:32:01.840 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:01.840 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.407 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:32:02.407 23:13:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:32:02.407 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:32:02.407 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:02.407 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.407 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.407 [2024-12-09 23:13:42.792546] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:02.407 [2024-12-09 23:13:42.792630] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:02.407 [2024-12-09 23:13:42.792656] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:32:02.407 [2024-12-09 23:13:42.792672] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:02.407 [2024-12-09 23:13:42.793173] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:02.407 [2024-12-09 23:13:42.793199] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:02.407 [2024-12-09 23:13:42.793287] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:32:02.407 [2024-12-09 23:13:42.793319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:02.407 [2024-12-09 23:13:42.793458] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:32:02.407 [2024-12-09 23:13:42.793474] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:32:02.407 [2024-12-09 23:13:42.793751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:32:02.407 [2024-12-09 23:13:42.793896] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:32:02.407 [2024-12-09 23:13:42.793910] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:32:02.407 [2024-12-09 23:13:42.794119] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:02.407 pt2 00:32:02.407 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.407 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:32:02.407 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:32:02.407 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:32:02.407 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:02.407 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:02.407 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:02.407 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:02.407 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:02.407 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:02.407 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:02.407 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:02.407 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:02.407 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:02.407 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:02.407 23:13:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.407 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.407 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.407 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:02.407 "name": "raid_bdev1", 00:32:02.407 "uuid": "b01cbeba-fd52-4bb8-bed4-a20f586cf6c7", 00:32:02.407 "strip_size_kb": 64, 00:32:02.407 "state": "online", 00:32:02.407 "raid_level": "raid0", 00:32:02.407 "superblock": true, 00:32:02.407 "num_base_bdevs": 2, 00:32:02.407 "num_base_bdevs_discovered": 2, 00:32:02.407 "num_base_bdevs_operational": 2, 00:32:02.407 "base_bdevs_list": [ 00:32:02.407 { 00:32:02.407 "name": "pt1", 00:32:02.407 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:02.407 "is_configured": true, 00:32:02.407 "data_offset": 2048, 00:32:02.407 "data_size": 63488 00:32:02.407 }, 00:32:02.407 { 00:32:02.407 "name": "pt2", 00:32:02.407 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:02.407 "is_configured": true, 00:32:02.407 "data_offset": 2048, 00:32:02.407 "data_size": 63488 00:32:02.407 } 00:32:02.407 ] 00:32:02.407 }' 00:32:02.407 23:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:02.407 23:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.665 23:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:32:02.665 23:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:32:02.665 23:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:02.665 23:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:02.665 23:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:32:02.665 23:13:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:02.665 23:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:02.665 23:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.665 23:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:02.665 23:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.666 [2024-12-09 23:13:43.256111] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:02.666 23:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.924 23:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:02.924 "name": "raid_bdev1", 00:32:02.924 "aliases": [ 00:32:02.924 "b01cbeba-fd52-4bb8-bed4-a20f586cf6c7" 00:32:02.924 ], 00:32:02.924 "product_name": "Raid Volume", 00:32:02.924 "block_size": 512, 00:32:02.924 "num_blocks": 126976, 00:32:02.924 "uuid": "b01cbeba-fd52-4bb8-bed4-a20f586cf6c7", 00:32:02.924 "assigned_rate_limits": { 00:32:02.924 "rw_ios_per_sec": 0, 00:32:02.924 "rw_mbytes_per_sec": 0, 00:32:02.924 "r_mbytes_per_sec": 0, 00:32:02.924 "w_mbytes_per_sec": 0 00:32:02.924 }, 00:32:02.924 "claimed": false, 00:32:02.924 "zoned": false, 00:32:02.924 "supported_io_types": { 00:32:02.924 "read": true, 00:32:02.924 "write": true, 00:32:02.924 "unmap": true, 00:32:02.924 "flush": true, 00:32:02.924 "reset": true, 00:32:02.924 "nvme_admin": false, 00:32:02.924 "nvme_io": false, 00:32:02.924 "nvme_io_md": false, 00:32:02.924 "write_zeroes": true, 00:32:02.924 "zcopy": false, 00:32:02.924 "get_zone_info": false, 00:32:02.924 "zone_management": false, 00:32:02.924 "zone_append": false, 00:32:02.924 "compare": false, 00:32:02.924 "compare_and_write": false, 00:32:02.924 "abort": false, 00:32:02.924 "seek_hole": false, 00:32:02.924 
"seek_data": false, 00:32:02.924 "copy": false, 00:32:02.924 "nvme_iov_md": false 00:32:02.924 }, 00:32:02.924 "memory_domains": [ 00:32:02.924 { 00:32:02.924 "dma_device_id": "system", 00:32:02.924 "dma_device_type": 1 00:32:02.924 }, 00:32:02.924 { 00:32:02.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:02.924 "dma_device_type": 2 00:32:02.924 }, 00:32:02.924 { 00:32:02.924 "dma_device_id": "system", 00:32:02.924 "dma_device_type": 1 00:32:02.924 }, 00:32:02.924 { 00:32:02.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:02.924 "dma_device_type": 2 00:32:02.924 } 00:32:02.924 ], 00:32:02.924 "driver_specific": { 00:32:02.924 "raid": { 00:32:02.924 "uuid": "b01cbeba-fd52-4bb8-bed4-a20f586cf6c7", 00:32:02.924 "strip_size_kb": 64, 00:32:02.924 "state": "online", 00:32:02.924 "raid_level": "raid0", 00:32:02.924 "superblock": true, 00:32:02.924 "num_base_bdevs": 2, 00:32:02.924 "num_base_bdevs_discovered": 2, 00:32:02.924 "num_base_bdevs_operational": 2, 00:32:02.924 "base_bdevs_list": [ 00:32:02.924 { 00:32:02.924 "name": "pt1", 00:32:02.925 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:02.925 "is_configured": true, 00:32:02.925 "data_offset": 2048, 00:32:02.925 "data_size": 63488 00:32:02.925 }, 00:32:02.925 { 00:32:02.925 "name": "pt2", 00:32:02.925 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:02.925 "is_configured": true, 00:32:02.925 "data_offset": 2048, 00:32:02.925 "data_size": 63488 00:32:02.925 } 00:32:02.925 ] 00:32:02.925 } 00:32:02.925 } 00:32:02.925 }' 00:32:02.925 23:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:02.925 23:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:32:02.925 pt2' 00:32:02.925 23:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:02.925 23:13:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:02.925 23:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:02.925 23:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:32:02.925 23:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.925 23:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.925 23:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:02.925 23:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.925 23:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:02.925 23:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:02.925 23:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:02.925 23:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:02.925 23:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:32:02.925 23:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.925 23:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.925 23:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.925 23:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:02.925 23:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:02.925 23:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:32:02.925 23:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.925 23:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.925 23:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:32:02.925 [2024-12-09 23:13:43.511760] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:02.925 23:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.925 23:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b01cbeba-fd52-4bb8-bed4-a20f586cf6c7 '!=' b01cbeba-fd52-4bb8-bed4-a20f586cf6c7 ']' 00:32:02.925 23:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:32:02.925 23:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:02.925 23:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:32:02.925 23:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61091 00:32:02.925 23:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61091 ']' 00:32:02.925 23:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61091 00:32:02.925 23:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:32:03.184 23:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:03.184 23:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61091 00:32:03.184 killing process with pid 61091 00:32:03.184 23:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:03.184 23:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:03.184 23:13:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 61091' 00:32:03.184 23:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61091 00:32:03.184 [2024-12-09 23:13:43.586084] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:03.184 [2024-12-09 23:13:43.586184] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:03.184 [2024-12-09 23:13:43.586238] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:03.184 23:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61091 00:32:03.184 [2024-12-09 23:13:43.586254] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:32:03.184 [2024-12-09 23:13:43.810329] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:04.560 23:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:32:04.560 00:32:04.560 real 0m4.719s 00:32:04.560 user 0m6.602s 00:32:04.560 sys 0m0.848s 00:32:04.561 ************************************ 00:32:04.561 END TEST raid_superblock_test 00:32:04.561 ************************************ 00:32:04.561 23:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:04.561 23:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:04.561 23:13:45 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:32:04.561 23:13:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:32:04.561 23:13:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:04.561 23:13:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:04.561 ************************************ 00:32:04.561 START TEST raid_read_error_test 00:32:04.561 ************************************ 00:32:04.561 23:13:45 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:32:04.561 23:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:32:04.561 23:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:32:04.561 23:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:32:04.561 23:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:32:04.561 23:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:04.561 23:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:32:04.561 23:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:04.561 23:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:04.561 23:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:32:04.561 23:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:04.561 23:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:04.561 23:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:04.561 23:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:32:04.561 23:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:32:04.561 23:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:32:04.561 23:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:32:04.561 23:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:32:04.561 23:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:32:04.561 23:13:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:32:04.561 23:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:32:04.561 23:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:32:04.561 23:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:32:04.561 23:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.KjfaULAwNr 00:32:04.561 23:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:32:04.561 23:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61302 00:32:04.561 23:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61302 00:32:04.561 23:13:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61302 ']' 00:32:04.561 23:13:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:04.561 23:13:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:04.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:04.561 23:13:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:04.561 23:13:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:04.561 23:13:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:04.823 [2024-12-09 23:13:45.219379] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:32:04.823 [2024-12-09 23:13:45.219582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61302 ] 00:32:04.823 [2024-12-09 23:13:45.429162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:05.085 [2024-12-09 23:13:45.562933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:05.342 [2024-12-09 23:13:45.790965] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:05.343 [2024-12-09 23:13:45.791031] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:05.601 23:13:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:05.601 23:13:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:32:05.601 23:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:05.601 23:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:32:05.601 23:13:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.601 23:13:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:05.601 BaseBdev1_malloc 00:32:05.601 23:13:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.601 23:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:32:05.601 23:13:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.601 23:13:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:05.601 true 00:32:05.601 23:13:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:32:05.601 23:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:32:05.601 23:13:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.601 23:13:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:05.601 [2024-12-09 23:13:46.157967] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:32:05.601 [2024-12-09 23:13:46.158047] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:05.601 [2024-12-09 23:13:46.158095] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:32:05.601 [2024-12-09 23:13:46.158117] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:05.601 [2024-12-09 23:13:46.160833] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:05.601 [2024-12-09 23:13:46.161018] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:05.601 BaseBdev1 00:32:05.601 23:13:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.601 23:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:05.601 23:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:32:05.601 23:13:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.602 23:13:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:05.602 BaseBdev2_malloc 00:32:05.602 23:13:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.602 23:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:32:05.602 23:13:46 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.602 23:13:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:05.602 true 00:32:05.602 23:13:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.602 23:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:32:05.602 23:13:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.602 23:13:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:05.602 [2024-12-09 23:13:46.226288] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:32:05.602 [2024-12-09 23:13:46.226372] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:05.602 [2024-12-09 23:13:46.226426] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:32:05.602 [2024-12-09 23:13:46.226450] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:05.602 [2024-12-09 23:13:46.229183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:05.602 [2024-12-09 23:13:46.229238] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:05.602 BaseBdev2 00:32:05.602 23:13:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.602 23:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:32:05.602 23:13:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.602 23:13:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:05.602 [2024-12-09 23:13:46.234367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:32:05.860 [2024-12-09 23:13:46.236719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:05.860 [2024-12-09 23:13:46.237095] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:32:05.860 [2024-12-09 23:13:46.237122] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:32:05.860 [2024-12-09 23:13:46.237461] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:32:05.860 [2024-12-09 23:13:46.237659] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:32:05.860 [2024-12-09 23:13:46.237674] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:32:05.860 [2024-12-09 23:13:46.237856] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:05.860 23:13:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.861 23:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:32:05.861 23:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:05.861 23:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:05.861 23:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:05.861 23:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:05.861 23:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:05.861 23:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:05.861 23:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:05.861 23:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:32:05.861 23:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:05.861 23:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:05.861 23:13:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.861 23:13:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:05.861 23:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:05.861 23:13:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.861 23:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:05.861 "name": "raid_bdev1", 00:32:05.861 "uuid": "64792606-6a7a-48ba-a07b-41757c9e045b", 00:32:05.861 "strip_size_kb": 64, 00:32:05.861 "state": "online", 00:32:05.861 "raid_level": "raid0", 00:32:05.861 "superblock": true, 00:32:05.861 "num_base_bdevs": 2, 00:32:05.861 "num_base_bdevs_discovered": 2, 00:32:05.861 "num_base_bdevs_operational": 2, 00:32:05.861 "base_bdevs_list": [ 00:32:05.861 { 00:32:05.861 "name": "BaseBdev1", 00:32:05.861 "uuid": "afac56c4-957f-59b8-9fcf-a063c27c10ce", 00:32:05.861 "is_configured": true, 00:32:05.861 "data_offset": 2048, 00:32:05.861 "data_size": 63488 00:32:05.861 }, 00:32:05.861 { 00:32:05.861 "name": "BaseBdev2", 00:32:05.861 "uuid": "0f28513d-560e-57c3-b68a-e990f9ee59b1", 00:32:05.861 "is_configured": true, 00:32:05.861 "data_offset": 2048, 00:32:05.861 "data_size": 63488 00:32:05.861 } 00:32:05.861 ] 00:32:05.861 }' 00:32:05.861 23:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:05.861 23:13:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:06.119 23:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:32:06.119 23:13:46 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:32:06.378 [2024-12-09 23:13:46.807872] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:32:07.356 23:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:32:07.356 23:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.356 23:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:07.356 23:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.356 23:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:32:07.356 23:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:32:07.356 23:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:32:07.356 23:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:32:07.356 23:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:07.356 23:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:07.356 23:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:07.356 23:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:07.356 23:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:07.356 23:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:07.356 23:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:07.356 23:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:32:07.356 23:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:07.356 23:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:07.356 23:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:07.356 23:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.356 23:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:07.356 23:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.356 23:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:07.356 "name": "raid_bdev1", 00:32:07.356 "uuid": "64792606-6a7a-48ba-a07b-41757c9e045b", 00:32:07.356 "strip_size_kb": 64, 00:32:07.356 "state": "online", 00:32:07.356 "raid_level": "raid0", 00:32:07.356 "superblock": true, 00:32:07.356 "num_base_bdevs": 2, 00:32:07.356 "num_base_bdevs_discovered": 2, 00:32:07.356 "num_base_bdevs_operational": 2, 00:32:07.356 "base_bdevs_list": [ 00:32:07.356 { 00:32:07.356 "name": "BaseBdev1", 00:32:07.356 "uuid": "afac56c4-957f-59b8-9fcf-a063c27c10ce", 00:32:07.356 "is_configured": true, 00:32:07.356 "data_offset": 2048, 00:32:07.356 "data_size": 63488 00:32:07.356 }, 00:32:07.356 { 00:32:07.356 "name": "BaseBdev2", 00:32:07.356 "uuid": "0f28513d-560e-57c3-b68a-e990f9ee59b1", 00:32:07.356 "is_configured": true, 00:32:07.356 "data_offset": 2048, 00:32:07.356 "data_size": 63488 00:32:07.356 } 00:32:07.356 ] 00:32:07.356 }' 00:32:07.356 23:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:07.357 23:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:07.615 23:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:07.615 23:13:48 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.615 23:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:07.615 [2024-12-09 23:13:48.163866] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:07.615 [2024-12-09 23:13:48.164086] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:07.615 [2024-12-09 23:13:48.167031] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:07.615 [2024-12-09 23:13:48.167091] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:07.615 [2024-12-09 23:13:48.167135] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:07.615 [2024-12-09 23:13:48.167154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:32:07.615 { 00:32:07.615 "results": [ 00:32:07.615 { 00:32:07.615 "job": "raid_bdev1", 00:32:07.615 "core_mask": "0x1", 00:32:07.615 "workload": "randrw", 00:32:07.615 "percentage": 50, 00:32:07.615 "status": "finished", 00:32:07.615 "queue_depth": 1, 00:32:07.615 "io_size": 131072, 00:32:07.615 "runtime": 1.355956, 00:32:07.615 "iops": 14372.885255863759, 00:32:07.615 "mibps": 1796.6106569829699, 00:32:07.615 "io_failed": 1, 00:32:07.615 "io_timeout": 0, 00:32:07.615 "avg_latency_us": 96.21094767989351, 00:32:07.615 "min_latency_us": 28.37590361445783, 00:32:07.615 "max_latency_us": 1658.1397590361446 00:32:07.615 } 00:32:07.615 ], 00:32:07.615 "core_count": 1 00:32:07.615 } 00:32:07.615 23:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.615 23:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61302 00:32:07.615 23:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61302 ']' 00:32:07.615 23:13:48 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61302 00:32:07.615 23:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:32:07.615 23:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:07.615 23:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61302 00:32:07.615 killing process with pid 61302 00:32:07.615 23:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:07.615 23:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:07.615 23:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61302' 00:32:07.615 23:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61302 00:32:07.615 [2024-12-09 23:13:48.212270] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:07.615 23:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61302 00:32:07.873 [2024-12-09 23:13:48.359524] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:09.249 23:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.KjfaULAwNr 00:32:09.249 23:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:32:09.249 23:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:32:09.249 23:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:32:09.249 23:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:32:09.249 23:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:09.249 23:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:32:09.249 23:13:49 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:32:09.249 00:32:09.249 real 0m4.578s 00:32:09.249 user 0m5.450s 00:32:09.249 sys 0m0.644s 00:32:09.249 23:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:09.249 23:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:09.249 ************************************ 00:32:09.249 END TEST raid_read_error_test 00:32:09.249 ************************************ 00:32:09.249 23:13:49 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:32:09.249 23:13:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:32:09.249 23:13:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:09.249 23:13:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:09.249 ************************************ 00:32:09.249 START TEST raid_write_error_test 00:32:09.249 ************************************ 00:32:09.249 23:13:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:32:09.249 23:13:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:32:09.249 23:13:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:32:09.249 23:13:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:32:09.249 23:13:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:32:09.249 23:13:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:09.249 23:13:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:32:09.249 23:13:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:09.249 23:13:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:09.249 23:13:49 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:32:09.249 23:13:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:09.249 23:13:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:09.250 23:13:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:09.250 23:13:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:32:09.250 23:13:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:32:09.250 23:13:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:32:09.250 23:13:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:32:09.250 23:13:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:32:09.250 23:13:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:32:09.250 23:13:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:32:09.250 23:13:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:32:09.250 23:13:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:32:09.250 23:13:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:32:09.250 23:13:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.5ujIYq623a 00:32:09.250 23:13:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:32:09.250 23:13:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61448 00:32:09.250 23:13:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61448 00:32:09.250 23:13:49 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61448 ']' 00:32:09.250 23:13:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:09.250 23:13:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:09.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:09.250 23:13:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:09.250 23:13:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:09.250 23:13:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:09.508 [2024-12-09 23:13:49.884937] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:32:09.508 [2024-12-09 23:13:49.885626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61448 ] 00:32:09.508 [2024-12-09 23:13:50.077441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:09.767 [2024-12-09 23:13:50.209340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:10.025 [2024-12-09 23:13:50.434335] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:10.025 [2024-12-09 23:13:50.434390] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:10.284 23:13:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:10.284 23:13:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:32:10.284 23:13:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:32:10.284 23:13:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:32:10.284 23:13:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.284 23:13:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:10.284 BaseBdev1_malloc 00:32:10.284 23:13:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.284 23:13:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:32:10.284 23:13:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.284 23:13:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:10.284 true 00:32:10.285 23:13:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.285 23:13:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:32:10.285 23:13:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.285 23:13:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:10.285 [2024-12-09 23:13:50.833676] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:32:10.285 [2024-12-09 23:13:50.833751] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:10.285 [2024-12-09 23:13:50.833778] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:32:10.285 [2024-12-09 23:13:50.833793] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:10.285 [2024-12-09 23:13:50.836451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:10.285 [2024-12-09 23:13:50.836496] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:10.285 BaseBdev1 00:32:10.285 23:13:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.285 23:13:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:10.285 23:13:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:32:10.285 23:13:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.285 23:13:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:10.285 BaseBdev2_malloc 00:32:10.285 23:13:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.285 23:13:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:32:10.285 23:13:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.285 23:13:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:10.285 true 00:32:10.285 23:13:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.285 23:13:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:32:10.285 23:13:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.285 23:13:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:10.285 [2024-12-09 23:13:50.904264] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:32:10.285 [2024-12-09 23:13:50.904549] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:10.285 [2024-12-09 23:13:50.904598] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:32:10.285 
[2024-12-09 23:13:50.904615] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:10.285 [2024-12-09 23:13:50.907606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:10.285 [2024-12-09 23:13:50.907806] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:10.285 BaseBdev2 00:32:10.285 23:13:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.285 23:13:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:32:10.285 23:13:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.285 23:13:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:10.285 [2024-12-09 23:13:50.912505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:10.285 [2024-12-09 23:13:50.914843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:10.285 [2024-12-09 23:13:50.915046] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:32:10.285 [2024-12-09 23:13:50.915067] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:32:10.285 [2024-12-09 23:13:50.915353] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:32:10.285 [2024-12-09 23:13:50.915566] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:32:10.285 [2024-12-09 23:13:50.915582] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:32:10.285 [2024-12-09 23:13:50.915745] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:10.285 23:13:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.285 
23:13:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:32:10.285 23:13:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:10.285 23:13:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:10.285 23:13:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:10.285 23:13:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:10.285 23:13:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:10.285 23:13:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:10.285 23:13:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:10.285 23:13:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:10.285 23:13:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:10.544 23:13:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:10.544 23:13:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.544 23:13:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:10.544 23:13:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:10.544 23:13:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.544 23:13:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:10.544 "name": "raid_bdev1", 00:32:10.544 "uuid": "1de56edc-050b-4898-a064-5a6dfdf2ab5e", 00:32:10.544 "strip_size_kb": 64, 00:32:10.544 "state": "online", 00:32:10.544 "raid_level": "raid0", 00:32:10.544 "superblock": true, 
00:32:10.544 "num_base_bdevs": 2, 00:32:10.544 "num_base_bdevs_discovered": 2, 00:32:10.544 "num_base_bdevs_operational": 2, 00:32:10.544 "base_bdevs_list": [ 00:32:10.544 { 00:32:10.544 "name": "BaseBdev1", 00:32:10.544 "uuid": "f24324a8-57d3-556d-8712-82cac8471b44", 00:32:10.544 "is_configured": true, 00:32:10.544 "data_offset": 2048, 00:32:10.544 "data_size": 63488 00:32:10.544 }, 00:32:10.544 { 00:32:10.544 "name": "BaseBdev2", 00:32:10.544 "uuid": "296d923d-da01-5253-8b39-356529273b58", 00:32:10.544 "is_configured": true, 00:32:10.544 "data_offset": 2048, 00:32:10.544 "data_size": 63488 00:32:10.544 } 00:32:10.544 ] 00:32:10.544 }' 00:32:10.544 23:13:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:10.544 23:13:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:10.803 23:13:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:32:10.803 23:13:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:32:11.062 [2024-12-09 23:13:51.481093] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:32:11.999 23:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:32:11.999 23:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.999 23:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:11.999 23:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.999 23:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:32:11.999 23:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:32:11.999 23:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:32:11.999 23:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:32:11.999 23:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:11.999 23:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:11.999 23:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:11.999 23:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:11.999 23:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:11.999 23:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:11.999 23:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:11.999 23:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:11.999 23:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:11.999 23:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:11.999 23:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:11.999 23:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.999 23:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:11.999 23:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.999 23:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:11.999 "name": "raid_bdev1", 00:32:11.999 "uuid": "1de56edc-050b-4898-a064-5a6dfdf2ab5e", 00:32:11.999 "strip_size_kb": 64, 00:32:11.999 "state": "online", 00:32:11.999 "raid_level": "raid0", 
00:32:11.999 "superblock": true, 00:32:11.999 "num_base_bdevs": 2, 00:32:11.999 "num_base_bdevs_discovered": 2, 00:32:11.999 "num_base_bdevs_operational": 2, 00:32:11.999 "base_bdevs_list": [ 00:32:11.999 { 00:32:11.999 "name": "BaseBdev1", 00:32:11.999 "uuid": "f24324a8-57d3-556d-8712-82cac8471b44", 00:32:11.999 "is_configured": true, 00:32:11.999 "data_offset": 2048, 00:32:11.999 "data_size": 63488 00:32:11.999 }, 00:32:11.999 { 00:32:11.999 "name": "BaseBdev2", 00:32:11.999 "uuid": "296d923d-da01-5253-8b39-356529273b58", 00:32:11.999 "is_configured": true, 00:32:11.999 "data_offset": 2048, 00:32:11.999 "data_size": 63488 00:32:11.999 } 00:32:11.999 ] 00:32:11.999 }' 00:32:11.999 23:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:11.999 23:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:12.258 23:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:12.258 23:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.258 23:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:12.258 [2024-12-09 23:13:52.846295] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:12.258 [2024-12-09 23:13:52.846340] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:12.258 [2024-12-09 23:13:52.849609] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:12.258 [2024-12-09 23:13:52.849821] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:12.258 [2024-12-09 23:13:52.849906] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:12.258 [2024-12-09 23:13:52.850028] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:32:12.258 { 
00:32:12.258 "results": [ 00:32:12.258 { 00:32:12.258 "job": "raid_bdev1", 00:32:12.258 "core_mask": "0x1", 00:32:12.258 "workload": "randrw", 00:32:12.258 "percentage": 50, 00:32:12.258 "status": "finished", 00:32:12.258 "queue_depth": 1, 00:32:12.258 "io_size": 131072, 00:32:12.258 "runtime": 1.365277, 00:32:12.258 "iops": 14545.03371843223, 00:32:12.258 "mibps": 1818.1292148040288, 00:32:12.258 "io_failed": 1, 00:32:12.258 "io_timeout": 0, 00:32:12.258 "avg_latency_us": 94.80996851093381, 00:32:12.258 "min_latency_us": 27.759036144578314, 00:32:12.258 "max_latency_us": 1566.0208835341366 00:32:12.258 } 00:32:12.258 ], 00:32:12.258 "core_count": 1 00:32:12.258 } 00:32:12.258 23:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.258 23:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61448 00:32:12.258 23:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 61448 ']' 00:32:12.258 23:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61448 00:32:12.258 23:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:32:12.258 23:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:12.258 23:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61448 00:32:12.518 killing process with pid 61448 00:32:12.518 23:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:12.518 23:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:12.518 23:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61448' 00:32:12.518 23:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61448 00:32:12.518 [2024-12-09 23:13:52.904313] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:12.518 23:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61448 00:32:12.518 [2024-12-09 23:13:53.054074] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:13.899 23:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:32:13.899 23:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.5ujIYq623a 00:32:13.899 23:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:32:13.899 23:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:32:13.899 23:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:32:13.899 23:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:13.899 23:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:32:13.899 ************************************ 00:32:13.899 END TEST raid_write_error_test 00:32:13.899 ************************************ 00:32:13.899 23:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:32:13.899 00:32:13.899 real 0m4.613s 00:32:13.899 user 0m5.567s 00:32:13.899 sys 0m0.612s 00:32:13.899 23:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:13.899 23:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:13.899 23:13:54 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:32:13.899 23:13:54 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:32:13.899 23:13:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:32:13.899 23:13:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:13.899 23:13:54 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:32:13.899 ************************************ 00:32:13.899 START TEST raid_state_function_test 00:32:13.899 ************************************ 00:32:13.899 23:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:32:13.899 23:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:32:13.899 23:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:32:13.899 23:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:32:13.899 23:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:32:13.899 23:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:32:13.899 23:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:13.899 23:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:32:13.899 23:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:13.899 23:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:13.899 23:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:32:13.899 23:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:13.899 23:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:13.899 23:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:13.899 23:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:32:13.899 23:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:32:13.899 23:13:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:32:13.899 23:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:32:13.899 23:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:32:13.899 23:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:32:13.899 23:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:32:13.899 23:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:32:13.899 23:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:32:13.899 23:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:32:13.899 23:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61592 00:32:13.899 Process raid pid: 61592 00:32:13.899 23:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61592' 00:32:13.899 23:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:32:13.899 23:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61592 00:32:13.899 23:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61592 ']' 00:32:13.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:13.899 23:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:13.899 23:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:13.900 23:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:13.900 23:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:13.900 23:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:14.158 [2024-12-09 23:13:54.570213] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:32:14.158 [2024-12-09 23:13:54.570346] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:14.158 [2024-12-09 23:13:54.746636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:14.417 [2024-12-09 23:13:54.883828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:14.674 [2024-12-09 23:13:55.117821] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:14.674 [2024-12-09 23:13:55.117870] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:14.932 23:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:14.932 23:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:32:14.932 23:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:32:14.932 23:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:32:14.932 23:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:14.932 [2024-12-09 23:13:55.454618] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:14.932 [2024-12-09 23:13:55.454829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:14.932 [2024-12-09 23:13:55.454855] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:14.932 [2024-12-09 23:13:55.454871] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:14.932 23:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.932 23:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:32:14.932 23:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:14.932 23:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:14.932 23:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:14.932 23:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:14.932 23:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:14.932 23:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:14.932 23:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:14.932 23:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:14.932 23:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:14.932 23:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:32:14.932 23:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:14.932 23:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.932 23:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:14.932 23:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.932 23:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:14.932 "name": "Existed_Raid", 00:32:14.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:14.932 "strip_size_kb": 64, 00:32:14.932 "state": "configuring", 00:32:14.932 "raid_level": "concat", 00:32:14.932 "superblock": false, 00:32:14.932 "num_base_bdevs": 2, 00:32:14.932 "num_base_bdevs_discovered": 0, 00:32:14.932 "num_base_bdevs_operational": 2, 00:32:14.932 "base_bdevs_list": [ 00:32:14.932 { 00:32:14.932 "name": "BaseBdev1", 00:32:14.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:14.932 "is_configured": false, 00:32:14.933 "data_offset": 0, 00:32:14.933 "data_size": 0 00:32:14.933 }, 00:32:14.933 { 00:32:14.933 "name": "BaseBdev2", 00:32:14.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:14.933 "is_configured": false, 00:32:14.933 "data_offset": 0, 00:32:14.933 "data_size": 0 00:32:14.933 } 00:32:14.933 ] 00:32:14.933 }' 00:32:14.933 23:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:14.933 23:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:15.505 23:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:15.505 23:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.505 23:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:32:15.505 [2024-12-09 23:13:55.918584] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:15.505 [2024-12-09 23:13:55.918630] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:32:15.505 23:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.505 23:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:32:15.505 23:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.505 23:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:15.505 [2024-12-09 23:13:55.930604] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:15.505 [2024-12-09 23:13:55.930667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:15.505 [2024-12-09 23:13:55.930684] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:15.505 [2024-12-09 23:13:55.930707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:15.505 23:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.505 23:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:32:15.505 23:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.505 23:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:15.505 [2024-12-09 23:13:55.983532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:15.505 BaseBdev1 00:32:15.505 23:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:32:15.505 23:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:32:15.505 23:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:32:15.505 23:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:15.505 23:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:32:15.505 23:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:15.505 23:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:15.505 23:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:15.505 23:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.505 23:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:15.505 23:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.505 23:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:15.505 23:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.505 23:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:15.505 [ 00:32:15.505 { 00:32:15.505 "name": "BaseBdev1", 00:32:15.505 "aliases": [ 00:32:15.505 "bdb06c79-af34-41c3-ab77-2874c559408b" 00:32:15.505 ], 00:32:15.505 "product_name": "Malloc disk", 00:32:15.505 "block_size": 512, 00:32:15.505 "num_blocks": 65536, 00:32:15.505 "uuid": "bdb06c79-af34-41c3-ab77-2874c559408b", 00:32:15.505 "assigned_rate_limits": { 00:32:15.505 "rw_ios_per_sec": 0, 00:32:15.505 "rw_mbytes_per_sec": 0, 00:32:15.505 "r_mbytes_per_sec": 0, 00:32:15.505 "w_mbytes_per_sec": 0 00:32:15.505 }, 
00:32:15.505 "claimed": true, 00:32:15.505 "claim_type": "exclusive_write", 00:32:15.505 "zoned": false, 00:32:15.505 "supported_io_types": { 00:32:15.505 "read": true, 00:32:15.505 "write": true, 00:32:15.505 "unmap": true, 00:32:15.505 "flush": true, 00:32:15.505 "reset": true, 00:32:15.505 "nvme_admin": false, 00:32:15.505 "nvme_io": false, 00:32:15.505 "nvme_io_md": false, 00:32:15.505 "write_zeroes": true, 00:32:15.505 "zcopy": true, 00:32:15.505 "get_zone_info": false, 00:32:15.505 "zone_management": false, 00:32:15.505 "zone_append": false, 00:32:15.505 "compare": false, 00:32:15.505 "compare_and_write": false, 00:32:15.505 "abort": true, 00:32:15.505 "seek_hole": false, 00:32:15.505 "seek_data": false, 00:32:15.505 "copy": true, 00:32:15.505 "nvme_iov_md": false 00:32:15.505 }, 00:32:15.505 "memory_domains": [ 00:32:15.505 { 00:32:15.505 "dma_device_id": "system", 00:32:15.505 "dma_device_type": 1 00:32:15.505 }, 00:32:15.505 { 00:32:15.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:15.505 "dma_device_type": 2 00:32:15.505 } 00:32:15.505 ], 00:32:15.505 "driver_specific": {} 00:32:15.505 } 00:32:15.505 ] 00:32:15.505 23:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.505 23:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:32:15.505 23:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:32:15.505 23:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:15.505 23:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:15.505 23:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:15.505 23:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:15.505 23:13:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:15.505 23:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:15.505 23:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:15.505 23:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:15.505 23:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:15.505 23:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:15.505 23:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:15.505 23:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.505 23:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:15.505 23:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.505 23:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:15.505 "name": "Existed_Raid", 00:32:15.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:15.505 "strip_size_kb": 64, 00:32:15.505 "state": "configuring", 00:32:15.505 "raid_level": "concat", 00:32:15.505 "superblock": false, 00:32:15.505 "num_base_bdevs": 2, 00:32:15.505 "num_base_bdevs_discovered": 1, 00:32:15.505 "num_base_bdevs_operational": 2, 00:32:15.505 "base_bdevs_list": [ 00:32:15.505 { 00:32:15.505 "name": "BaseBdev1", 00:32:15.505 "uuid": "bdb06c79-af34-41c3-ab77-2874c559408b", 00:32:15.505 "is_configured": true, 00:32:15.505 "data_offset": 0, 00:32:15.505 "data_size": 65536 00:32:15.505 }, 00:32:15.505 { 00:32:15.505 "name": "BaseBdev2", 00:32:15.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:15.505 "is_configured": false, 00:32:15.505 
"data_offset": 0, 00:32:15.505 "data_size": 0 00:32:15.505 } 00:32:15.505 ] 00:32:15.505 }' 00:32:15.505 23:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:15.505 23:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.074 23:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:16.074 23:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.074 23:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.074 [2024-12-09 23:13:56.474889] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:16.074 [2024-12-09 23:13:56.475083] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:32:16.074 23:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.074 23:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:32:16.074 23:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.074 23:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.074 [2024-12-09 23:13:56.486927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:16.074 [2024-12-09 23:13:56.489179] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:16.074 [2024-12-09 23:13:56.489227] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:16.074 23:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.074 23:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 
00:32:16.074 23:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:16.074 23:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:32:16.074 23:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:16.074 23:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:16.074 23:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:16.074 23:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:16.074 23:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:16.074 23:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:16.074 23:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:16.074 23:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:16.074 23:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:16.074 23:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:16.074 23:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.074 23:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:16.074 23:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.074 23:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.074 23:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:16.074 "name": "Existed_Raid", 00:32:16.074 
"uuid": "00000000-0000-0000-0000-000000000000", 00:32:16.074 "strip_size_kb": 64, 00:32:16.074 "state": "configuring", 00:32:16.074 "raid_level": "concat", 00:32:16.074 "superblock": false, 00:32:16.074 "num_base_bdevs": 2, 00:32:16.074 "num_base_bdevs_discovered": 1, 00:32:16.074 "num_base_bdevs_operational": 2, 00:32:16.074 "base_bdevs_list": [ 00:32:16.074 { 00:32:16.074 "name": "BaseBdev1", 00:32:16.074 "uuid": "bdb06c79-af34-41c3-ab77-2874c559408b", 00:32:16.074 "is_configured": true, 00:32:16.074 "data_offset": 0, 00:32:16.074 "data_size": 65536 00:32:16.074 }, 00:32:16.074 { 00:32:16.074 "name": "BaseBdev2", 00:32:16.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:16.074 "is_configured": false, 00:32:16.074 "data_offset": 0, 00:32:16.074 "data_size": 0 00:32:16.074 } 00:32:16.074 ] 00:32:16.074 }' 00:32:16.074 23:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:16.074 23:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.333 23:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:32:16.333 23:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.333 23:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.593 [2024-12-09 23:13:56.976873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:16.593 [2024-12-09 23:13:56.976936] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:32:16.593 [2024-12-09 23:13:56.976946] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:32:16.593 [2024-12-09 23:13:56.977333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:32:16.593 [2024-12-09 23:13:56.977563] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x617000007e80 00:32:16.593 [2024-12-09 23:13:56.977581] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:32:16.593 [2024-12-09 23:13:56.977900] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:16.593 BaseBdev2 00:32:16.593 23:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.593 23:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:32:16.593 23:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:32:16.593 23:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:16.593 23:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:32:16.593 23:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:16.593 23:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:16.593 23:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:16.593 23:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.593 23:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.593 23:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.593 23:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:16.593 23:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.593 23:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.593 [ 00:32:16.593 { 00:32:16.593 "name": "BaseBdev2", 00:32:16.593 "aliases": [ 00:32:16.593 
"88581eb4-4f00-441b-b2b3-501543459161" 00:32:16.593 ], 00:32:16.593 "product_name": "Malloc disk", 00:32:16.593 "block_size": 512, 00:32:16.593 "num_blocks": 65536, 00:32:16.593 "uuid": "88581eb4-4f00-441b-b2b3-501543459161", 00:32:16.593 "assigned_rate_limits": { 00:32:16.593 "rw_ios_per_sec": 0, 00:32:16.593 "rw_mbytes_per_sec": 0, 00:32:16.593 "r_mbytes_per_sec": 0, 00:32:16.593 "w_mbytes_per_sec": 0 00:32:16.593 }, 00:32:16.593 "claimed": true, 00:32:16.593 "claim_type": "exclusive_write", 00:32:16.593 "zoned": false, 00:32:16.593 "supported_io_types": { 00:32:16.593 "read": true, 00:32:16.593 "write": true, 00:32:16.593 "unmap": true, 00:32:16.593 "flush": true, 00:32:16.593 "reset": true, 00:32:16.593 "nvme_admin": false, 00:32:16.593 "nvme_io": false, 00:32:16.593 "nvme_io_md": false, 00:32:16.593 "write_zeroes": true, 00:32:16.593 "zcopy": true, 00:32:16.593 "get_zone_info": false, 00:32:16.593 "zone_management": false, 00:32:16.593 "zone_append": false, 00:32:16.593 "compare": false, 00:32:16.593 "compare_and_write": false, 00:32:16.593 "abort": true, 00:32:16.593 "seek_hole": false, 00:32:16.593 "seek_data": false, 00:32:16.593 "copy": true, 00:32:16.593 "nvme_iov_md": false 00:32:16.593 }, 00:32:16.593 "memory_domains": [ 00:32:16.593 { 00:32:16.593 "dma_device_id": "system", 00:32:16.593 "dma_device_type": 1 00:32:16.593 }, 00:32:16.593 { 00:32:16.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:16.593 "dma_device_type": 2 00:32:16.593 } 00:32:16.593 ], 00:32:16.593 "driver_specific": {} 00:32:16.593 } 00:32:16.593 ] 00:32:16.593 23:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.593 23:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:32:16.593 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:32:16.593 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:16.593 
23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:32:16.593 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:16.593 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:16.593 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:16.593 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:16.593 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:16.593 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:16.593 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:16.593 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:16.593 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:16.593 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:16.593 23:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.593 23:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.593 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:16.593 23:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.593 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:16.593 "name": "Existed_Raid", 00:32:16.593 "uuid": "8d400d1f-c4bc-4033-84a5-cdf69187b16a", 00:32:16.593 "strip_size_kb": 64, 00:32:16.593 "state": "online", 00:32:16.593 
"raid_level": "concat", 00:32:16.593 "superblock": false, 00:32:16.593 "num_base_bdevs": 2, 00:32:16.593 "num_base_bdevs_discovered": 2, 00:32:16.593 "num_base_bdevs_operational": 2, 00:32:16.593 "base_bdevs_list": [ 00:32:16.593 { 00:32:16.593 "name": "BaseBdev1", 00:32:16.593 "uuid": "bdb06c79-af34-41c3-ab77-2874c559408b", 00:32:16.593 "is_configured": true, 00:32:16.593 "data_offset": 0, 00:32:16.593 "data_size": 65536 00:32:16.593 }, 00:32:16.593 { 00:32:16.593 "name": "BaseBdev2", 00:32:16.593 "uuid": "88581eb4-4f00-441b-b2b3-501543459161", 00:32:16.593 "is_configured": true, 00:32:16.593 "data_offset": 0, 00:32:16.593 "data_size": 65536 00:32:16.593 } 00:32:16.593 ] 00:32:16.593 }' 00:32:16.593 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:16.593 23:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.853 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:32:16.853 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:32:16.853 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:16.853 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:16.853 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:32:16.853 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:16.853 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:32:16.853 23:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.853 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:16.853 23:13:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:32:16.853 [2024-12-09 23:13:57.476565] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:17.112 23:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.112 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:17.112 "name": "Existed_Raid", 00:32:17.112 "aliases": [ 00:32:17.112 "8d400d1f-c4bc-4033-84a5-cdf69187b16a" 00:32:17.112 ], 00:32:17.112 "product_name": "Raid Volume", 00:32:17.112 "block_size": 512, 00:32:17.112 "num_blocks": 131072, 00:32:17.112 "uuid": "8d400d1f-c4bc-4033-84a5-cdf69187b16a", 00:32:17.112 "assigned_rate_limits": { 00:32:17.112 "rw_ios_per_sec": 0, 00:32:17.112 "rw_mbytes_per_sec": 0, 00:32:17.112 "r_mbytes_per_sec": 0, 00:32:17.112 "w_mbytes_per_sec": 0 00:32:17.112 }, 00:32:17.112 "claimed": false, 00:32:17.112 "zoned": false, 00:32:17.112 "supported_io_types": { 00:32:17.112 "read": true, 00:32:17.112 "write": true, 00:32:17.112 "unmap": true, 00:32:17.112 "flush": true, 00:32:17.112 "reset": true, 00:32:17.112 "nvme_admin": false, 00:32:17.112 "nvme_io": false, 00:32:17.112 "nvme_io_md": false, 00:32:17.112 "write_zeroes": true, 00:32:17.112 "zcopy": false, 00:32:17.112 "get_zone_info": false, 00:32:17.112 "zone_management": false, 00:32:17.112 "zone_append": false, 00:32:17.112 "compare": false, 00:32:17.112 "compare_and_write": false, 00:32:17.112 "abort": false, 00:32:17.112 "seek_hole": false, 00:32:17.112 "seek_data": false, 00:32:17.112 "copy": false, 00:32:17.112 "nvme_iov_md": false 00:32:17.112 }, 00:32:17.112 "memory_domains": [ 00:32:17.112 { 00:32:17.112 "dma_device_id": "system", 00:32:17.112 "dma_device_type": 1 00:32:17.112 }, 00:32:17.112 { 00:32:17.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:17.112 "dma_device_type": 2 00:32:17.112 }, 00:32:17.112 { 00:32:17.112 "dma_device_id": "system", 00:32:17.112 "dma_device_type": 1 00:32:17.112 }, 
00:32:17.112 { 00:32:17.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:17.112 "dma_device_type": 2 00:32:17.112 } 00:32:17.112 ], 00:32:17.112 "driver_specific": { 00:32:17.112 "raid": { 00:32:17.112 "uuid": "8d400d1f-c4bc-4033-84a5-cdf69187b16a", 00:32:17.112 "strip_size_kb": 64, 00:32:17.112 "state": "online", 00:32:17.112 "raid_level": "concat", 00:32:17.112 "superblock": false, 00:32:17.112 "num_base_bdevs": 2, 00:32:17.112 "num_base_bdevs_discovered": 2, 00:32:17.112 "num_base_bdevs_operational": 2, 00:32:17.112 "base_bdevs_list": [ 00:32:17.112 { 00:32:17.112 "name": "BaseBdev1", 00:32:17.112 "uuid": "bdb06c79-af34-41c3-ab77-2874c559408b", 00:32:17.112 "is_configured": true, 00:32:17.112 "data_offset": 0, 00:32:17.112 "data_size": 65536 00:32:17.112 }, 00:32:17.112 { 00:32:17.112 "name": "BaseBdev2", 00:32:17.112 "uuid": "88581eb4-4f00-441b-b2b3-501543459161", 00:32:17.112 "is_configured": true, 00:32:17.112 "data_offset": 0, 00:32:17.112 "data_size": 65536 00:32:17.112 } 00:32:17.112 ] 00:32:17.112 } 00:32:17.112 } 00:32:17.112 }' 00:32:17.112 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:17.112 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:32:17.112 BaseBdev2' 00:32:17.112 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:17.112 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:17.112 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:17.112 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:17.112 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:32:17.112 23:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.112 23:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:17.112 23:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.112 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:17.112 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:17.112 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:17.112 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:32:17.112 23:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.112 23:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:17.112 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:17.112 23:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.112 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:17.112 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:17.112 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:32:17.112 23:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.112 23:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:17.112 [2024-12-09 23:13:57.711984] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:17.112 [2024-12-09 
23:13:57.712027] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:17.112 [2024-12-09 23:13:57.712085] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:17.371 23:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.371 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:32:17.371 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:32:17.371 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:17.371 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:32:17.371 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:32:17.371 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:32:17.371 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:17.371 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:32:17.371 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:17.371 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:17.371 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:17.371 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:17.371 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:17.371 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:17.371 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:32:17.372 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:17.372 23:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.372 23:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:17.372 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:17.372 23:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.372 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:17.372 "name": "Existed_Raid", 00:32:17.372 "uuid": "8d400d1f-c4bc-4033-84a5-cdf69187b16a", 00:32:17.372 "strip_size_kb": 64, 00:32:17.372 "state": "offline", 00:32:17.372 "raid_level": "concat", 00:32:17.372 "superblock": false, 00:32:17.372 "num_base_bdevs": 2, 00:32:17.372 "num_base_bdevs_discovered": 1, 00:32:17.372 "num_base_bdevs_operational": 1, 00:32:17.372 "base_bdevs_list": [ 00:32:17.372 { 00:32:17.372 "name": null, 00:32:17.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:17.372 "is_configured": false, 00:32:17.372 "data_offset": 0, 00:32:17.372 "data_size": 65536 00:32:17.372 }, 00:32:17.372 { 00:32:17.372 "name": "BaseBdev2", 00:32:17.372 "uuid": "88581eb4-4f00-441b-b2b3-501543459161", 00:32:17.372 "is_configured": true, 00:32:17.372 "data_offset": 0, 00:32:17.372 "data_size": 65536 00:32:17.372 } 00:32:17.372 ] 00:32:17.372 }' 00:32:17.372 23:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:17.372 23:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:17.939 23:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:32:17.939 23:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:17.939 23:13:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:17.939 23:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:17.939 23:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.939 23:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:17.939 23:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.939 23:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:17.939 23:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:17.939 23:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:32:17.939 23:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.939 23:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:17.939 [2024-12-09 23:13:58.323816] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:17.939 [2024-12-09 23:13:58.323878] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:32:17.939 23:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.939 23:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:17.939 23:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:17.939 23:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:17.939 23:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:32:17.939 23:13:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.939 23:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:17.939 23:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.939 23:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:32:17.939 23:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:32:17.939 23:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:32:17.939 23:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61592 00:32:17.939 23:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61592 ']' 00:32:17.939 23:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 61592 00:32:17.939 23:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:32:17.939 23:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:17.939 23:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61592 00:32:17.939 killing process with pid 61592 00:32:17.939 23:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:17.939 23:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:17.939 23:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61592' 00:32:17.939 23:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61592 00:32:17.939 [2024-12-09 23:13:58.512411] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:17.939 23:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61592 00:32:17.939 [2024-12-09 
23:13:58.530227] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:19.320 23:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:32:19.320 00:32:19.320 real 0m5.312s 00:32:19.320 user 0m7.573s 00:32:19.320 sys 0m1.001s 00:32:19.320 23:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:19.320 ************************************ 00:32:19.320 END TEST raid_state_function_test 00:32:19.320 ************************************ 00:32:19.320 23:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:19.320 23:13:59 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:32:19.320 23:13:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:32:19.320 23:13:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:19.320 23:13:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:19.320 ************************************ 00:32:19.320 START TEST raid_state_function_test_sb 00:32:19.320 ************************************ 00:32:19.320 23:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:32:19.320 23:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:32:19.320 23:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:32:19.320 23:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:32:19.320 23:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:32:19.320 23:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:32:19.320 23:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:19.320 23:13:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:32:19.320 23:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:19.320 23:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:19.320 23:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:32:19.320 23:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:19.320 23:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:19.320 23:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:19.320 23:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:32:19.320 23:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:32:19.320 23:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:32:19.320 23:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:32:19.320 23:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:32:19.320 23:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:32:19.320 23:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:32:19.320 23:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:32:19.320 23:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:32:19.320 23:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:32:19.320 23:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61845 
00:32:19.320 23:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:32:19.320 Process raid pid: 61845 00:32:19.320 23:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61845' 00:32:19.320 23:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61845 00:32:19.320 23:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61845 ']' 00:32:19.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:19.320 23:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:19.320 23:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:19.320 23:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:19.320 23:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:19.320 23:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:19.320 [2024-12-09 23:13:59.950658] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:32:19.320 [2024-12-09 23:13:59.950785] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:19.579 [2024-12-09 23:14:00.140565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:19.838 [2024-12-09 23:14:00.274661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:20.097 [2024-12-09 23:14:00.510668] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:20.097 [2024-12-09 23:14:00.510968] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:20.357 23:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:20.357 23:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:32:20.357 23:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:32:20.357 23:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.357 23:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:20.357 [2024-12-09 23:14:00.905898] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:20.357 [2024-12-09 23:14:00.905970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:20.357 [2024-12-09 23:14:00.905982] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:20.357 [2024-12-09 23:14:00.905997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:20.357 23:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:32:20.357 23:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:32:20.357 23:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:20.357 23:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:20.357 23:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:20.357 23:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:20.357 23:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:20.357 23:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:20.357 23:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:20.357 23:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:20.357 23:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:20.358 23:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:20.358 23:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.358 23:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:20.358 23:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:20.358 23:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.358 23:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:20.358 "name": "Existed_Raid", 00:32:20.358 "uuid": "615569ae-8d97-429a-88cc-6e1305d4dc64", 00:32:20.358 
"strip_size_kb": 64, 00:32:20.358 "state": "configuring", 00:32:20.358 "raid_level": "concat", 00:32:20.358 "superblock": true, 00:32:20.358 "num_base_bdevs": 2, 00:32:20.358 "num_base_bdevs_discovered": 0, 00:32:20.358 "num_base_bdevs_operational": 2, 00:32:20.358 "base_bdevs_list": [ 00:32:20.358 { 00:32:20.358 "name": "BaseBdev1", 00:32:20.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:20.358 "is_configured": false, 00:32:20.358 "data_offset": 0, 00:32:20.358 "data_size": 0 00:32:20.358 }, 00:32:20.358 { 00:32:20.358 "name": "BaseBdev2", 00:32:20.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:20.358 "is_configured": false, 00:32:20.358 "data_offset": 0, 00:32:20.358 "data_size": 0 00:32:20.358 } 00:32:20.358 ] 00:32:20.358 }' 00:32:20.358 23:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:20.358 23:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:20.933 [2024-12-09 23:14:01.341255] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:20.933 [2024-12-09 23:14:01.341297] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:20.933 [2024-12-09 23:14:01.353251] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:20.933 [2024-12-09 23:14:01.353306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:20.933 [2024-12-09 23:14:01.353318] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:20.933 [2024-12-09 23:14:01.353335] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:20.933 [2024-12-09 23:14:01.406819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:20.933 BaseBdev1 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:20.933 [ 00:32:20.933 { 00:32:20.933 "name": "BaseBdev1", 00:32:20.933 "aliases": [ 00:32:20.933 "50354f29-3d53-4627-bfd0-5beda7b22005" 00:32:20.933 ], 00:32:20.933 "product_name": "Malloc disk", 00:32:20.933 "block_size": 512, 00:32:20.933 "num_blocks": 65536, 00:32:20.933 "uuid": "50354f29-3d53-4627-bfd0-5beda7b22005", 00:32:20.933 "assigned_rate_limits": { 00:32:20.933 "rw_ios_per_sec": 0, 00:32:20.933 "rw_mbytes_per_sec": 0, 00:32:20.933 "r_mbytes_per_sec": 0, 00:32:20.933 "w_mbytes_per_sec": 0 00:32:20.933 }, 00:32:20.933 "claimed": true, 00:32:20.933 "claim_type": "exclusive_write", 00:32:20.933 "zoned": false, 00:32:20.933 "supported_io_types": { 00:32:20.933 "read": true, 00:32:20.933 "write": true, 00:32:20.933 "unmap": true, 00:32:20.933 "flush": true, 00:32:20.933 "reset": true, 00:32:20.933 "nvme_admin": false, 00:32:20.933 "nvme_io": false, 00:32:20.933 "nvme_io_md": false, 00:32:20.933 "write_zeroes": true, 00:32:20.933 "zcopy": true, 00:32:20.933 "get_zone_info": false, 00:32:20.933 "zone_management": false, 00:32:20.933 "zone_append": false, 00:32:20.933 "compare": false, 00:32:20.933 
"compare_and_write": false, 00:32:20.933 "abort": true, 00:32:20.933 "seek_hole": false, 00:32:20.933 "seek_data": false, 00:32:20.933 "copy": true, 00:32:20.933 "nvme_iov_md": false 00:32:20.933 }, 00:32:20.933 "memory_domains": [ 00:32:20.933 { 00:32:20.933 "dma_device_id": "system", 00:32:20.933 "dma_device_type": 1 00:32:20.933 }, 00:32:20.933 { 00:32:20.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:20.933 "dma_device_type": 2 00:32:20.933 } 00:32:20.933 ], 00:32:20.933 "driver_specific": {} 00:32:20.933 } 00:32:20.933 ] 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:20.933 23:14:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:20.933 "name": "Existed_Raid", 00:32:20.933 "uuid": "b55e998c-bb6e-42be-83b0-054c60217392", 00:32:20.933 "strip_size_kb": 64, 00:32:20.933 "state": "configuring", 00:32:20.933 "raid_level": "concat", 00:32:20.933 "superblock": true, 00:32:20.933 "num_base_bdevs": 2, 00:32:20.933 "num_base_bdevs_discovered": 1, 00:32:20.933 "num_base_bdevs_operational": 2, 00:32:20.933 "base_bdevs_list": [ 00:32:20.933 { 00:32:20.933 "name": "BaseBdev1", 00:32:20.933 "uuid": "50354f29-3d53-4627-bfd0-5beda7b22005", 00:32:20.933 "is_configured": true, 00:32:20.933 "data_offset": 2048, 00:32:20.933 "data_size": 63488 00:32:20.933 }, 00:32:20.933 { 00:32:20.933 "name": "BaseBdev2", 00:32:20.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:20.933 "is_configured": false, 00:32:20.933 "data_offset": 0, 00:32:20.933 "data_size": 0 00:32:20.933 } 00:32:20.933 ] 00:32:20.933 }' 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:20.933 23:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:21.501 23:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:21.501 23:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:32:21.501 23:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:21.501 [2024-12-09 23:14:01.890325] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:21.501 [2024-12-09 23:14:01.890387] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:32:21.501 23:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.501 23:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:32:21.501 23:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.501 23:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:21.501 [2024-12-09 23:14:01.902420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:21.501 [2024-12-09 23:14:01.904645] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:21.501 [2024-12-09 23:14:01.904721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:21.501 23:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.501 23:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:32:21.501 23:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:21.501 23:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:32:21.501 23:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:21.501 23:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:32:21.501 23:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:21.501 23:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:21.501 23:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:21.501 23:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:21.501 23:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:21.501 23:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:21.501 23:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:21.501 23:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:21.501 23:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:21.501 23:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.501 23:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:21.501 23:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.501 23:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:21.501 "name": "Existed_Raid", 00:32:21.501 "uuid": "654e003f-b91f-420f-8301-36eee92f0f9b", 00:32:21.501 "strip_size_kb": 64, 00:32:21.501 "state": "configuring", 00:32:21.501 "raid_level": "concat", 00:32:21.501 "superblock": true, 00:32:21.501 "num_base_bdevs": 2, 00:32:21.501 "num_base_bdevs_discovered": 1, 00:32:21.501 "num_base_bdevs_operational": 2, 00:32:21.501 "base_bdevs_list": [ 00:32:21.501 { 00:32:21.501 "name": "BaseBdev1", 00:32:21.501 "uuid": 
"50354f29-3d53-4627-bfd0-5beda7b22005", 00:32:21.501 "is_configured": true, 00:32:21.501 "data_offset": 2048, 00:32:21.501 "data_size": 63488 00:32:21.501 }, 00:32:21.501 { 00:32:21.501 "name": "BaseBdev2", 00:32:21.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:21.501 "is_configured": false, 00:32:21.501 "data_offset": 0, 00:32:21.501 "data_size": 0 00:32:21.501 } 00:32:21.501 ] 00:32:21.501 }' 00:32:21.501 23:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:21.501 23:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:21.760 23:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:32:21.760 23:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.760 23:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:22.019 [2024-12-09 23:14:02.401833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:22.019 [2024-12-09 23:14:02.402162] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:32:22.019 [2024-12-09 23:14:02.402183] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:32:22.019 [2024-12-09 23:14:02.402490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:32:22.019 [2024-12-09 23:14:02.402657] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:32:22.019 [2024-12-09 23:14:02.402678] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:32:22.019 BaseBdev2 00:32:22.019 [2024-12-09 23:14:02.402854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:22.019 23:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:32:22.019 23:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:32:22.019 23:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:32:22.019 23:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:22.019 23:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:32:22.019 23:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:22.019 23:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:22.019 23:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:22.019 23:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.019 23:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:22.019 23:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.019 23:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:22.020 23:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.020 23:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:22.020 [ 00:32:22.020 { 00:32:22.020 "name": "BaseBdev2", 00:32:22.020 "aliases": [ 00:32:22.020 "85c22e9c-a620-4534-9f0d-9d2bdf7d9556" 00:32:22.020 ], 00:32:22.020 "product_name": "Malloc disk", 00:32:22.020 "block_size": 512, 00:32:22.020 "num_blocks": 65536, 00:32:22.020 "uuid": "85c22e9c-a620-4534-9f0d-9d2bdf7d9556", 00:32:22.020 "assigned_rate_limits": { 00:32:22.020 "rw_ios_per_sec": 0, 00:32:22.020 "rw_mbytes_per_sec": 0, 00:32:22.020 "r_mbytes_per_sec": 0, 
00:32:22.020 "w_mbytes_per_sec": 0 00:32:22.020 }, 00:32:22.020 "claimed": true, 00:32:22.020 "claim_type": "exclusive_write", 00:32:22.020 "zoned": false, 00:32:22.020 "supported_io_types": { 00:32:22.020 "read": true, 00:32:22.020 "write": true, 00:32:22.020 "unmap": true, 00:32:22.020 "flush": true, 00:32:22.020 "reset": true, 00:32:22.020 "nvme_admin": false, 00:32:22.020 "nvme_io": false, 00:32:22.020 "nvme_io_md": false, 00:32:22.020 "write_zeroes": true, 00:32:22.020 "zcopy": true, 00:32:22.020 "get_zone_info": false, 00:32:22.020 "zone_management": false, 00:32:22.020 "zone_append": false, 00:32:22.020 "compare": false, 00:32:22.020 "compare_and_write": false, 00:32:22.020 "abort": true, 00:32:22.020 "seek_hole": false, 00:32:22.020 "seek_data": false, 00:32:22.020 "copy": true, 00:32:22.020 "nvme_iov_md": false 00:32:22.020 }, 00:32:22.020 "memory_domains": [ 00:32:22.020 { 00:32:22.020 "dma_device_id": "system", 00:32:22.020 "dma_device_type": 1 00:32:22.020 }, 00:32:22.020 { 00:32:22.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:22.020 "dma_device_type": 2 00:32:22.020 } 00:32:22.020 ], 00:32:22.020 "driver_specific": {} 00:32:22.020 } 00:32:22.020 ] 00:32:22.020 23:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.020 23:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:32:22.020 23:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:32:22.020 23:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:22.020 23:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:32:22.020 23:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:22.020 23:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:32:22.020 23:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:22.020 23:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:22.020 23:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:22.020 23:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:22.020 23:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:22.020 23:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:22.020 23:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:22.020 23:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:22.020 23:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.020 23:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:22.020 23:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:22.020 23:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.020 23:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:22.020 "name": "Existed_Raid", 00:32:22.020 "uuid": "654e003f-b91f-420f-8301-36eee92f0f9b", 00:32:22.020 "strip_size_kb": 64, 00:32:22.020 "state": "online", 00:32:22.020 "raid_level": "concat", 00:32:22.020 "superblock": true, 00:32:22.020 "num_base_bdevs": 2, 00:32:22.020 "num_base_bdevs_discovered": 2, 00:32:22.020 "num_base_bdevs_operational": 2, 00:32:22.020 "base_bdevs_list": [ 00:32:22.020 { 00:32:22.020 "name": "BaseBdev1", 00:32:22.020 "uuid": 
"50354f29-3d53-4627-bfd0-5beda7b22005", 00:32:22.020 "is_configured": true, 00:32:22.020 "data_offset": 2048, 00:32:22.020 "data_size": 63488 00:32:22.020 }, 00:32:22.020 { 00:32:22.020 "name": "BaseBdev2", 00:32:22.020 "uuid": "85c22e9c-a620-4534-9f0d-9d2bdf7d9556", 00:32:22.020 "is_configured": true, 00:32:22.020 "data_offset": 2048, 00:32:22.020 "data_size": 63488 00:32:22.020 } 00:32:22.020 ] 00:32:22.020 }' 00:32:22.020 23:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:22.020 23:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:22.278 23:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:32:22.278 23:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:32:22.278 23:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:22.278 23:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:22.278 23:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:32:22.278 23:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:22.278 23:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:32:22.278 23:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.278 23:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:22.278 23:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:22.278 [2024-12-09 23:14:02.901504] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:22.537 23:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:32:22.537 23:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:22.537 "name": "Existed_Raid", 00:32:22.537 "aliases": [ 00:32:22.537 "654e003f-b91f-420f-8301-36eee92f0f9b" 00:32:22.537 ], 00:32:22.537 "product_name": "Raid Volume", 00:32:22.537 "block_size": 512, 00:32:22.537 "num_blocks": 126976, 00:32:22.537 "uuid": "654e003f-b91f-420f-8301-36eee92f0f9b", 00:32:22.537 "assigned_rate_limits": { 00:32:22.537 "rw_ios_per_sec": 0, 00:32:22.537 "rw_mbytes_per_sec": 0, 00:32:22.537 "r_mbytes_per_sec": 0, 00:32:22.537 "w_mbytes_per_sec": 0 00:32:22.537 }, 00:32:22.537 "claimed": false, 00:32:22.537 "zoned": false, 00:32:22.537 "supported_io_types": { 00:32:22.537 "read": true, 00:32:22.537 "write": true, 00:32:22.537 "unmap": true, 00:32:22.537 "flush": true, 00:32:22.537 "reset": true, 00:32:22.537 "nvme_admin": false, 00:32:22.537 "nvme_io": false, 00:32:22.537 "nvme_io_md": false, 00:32:22.537 "write_zeroes": true, 00:32:22.537 "zcopy": false, 00:32:22.537 "get_zone_info": false, 00:32:22.537 "zone_management": false, 00:32:22.537 "zone_append": false, 00:32:22.537 "compare": false, 00:32:22.537 "compare_and_write": false, 00:32:22.537 "abort": false, 00:32:22.537 "seek_hole": false, 00:32:22.537 "seek_data": false, 00:32:22.537 "copy": false, 00:32:22.537 "nvme_iov_md": false 00:32:22.537 }, 00:32:22.537 "memory_domains": [ 00:32:22.537 { 00:32:22.537 "dma_device_id": "system", 00:32:22.537 "dma_device_type": 1 00:32:22.537 }, 00:32:22.537 { 00:32:22.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:22.537 "dma_device_type": 2 00:32:22.537 }, 00:32:22.537 { 00:32:22.537 "dma_device_id": "system", 00:32:22.537 "dma_device_type": 1 00:32:22.537 }, 00:32:22.537 { 00:32:22.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:22.537 "dma_device_type": 2 00:32:22.537 } 00:32:22.537 ], 00:32:22.537 "driver_specific": { 00:32:22.537 "raid": { 00:32:22.537 "uuid": "654e003f-b91f-420f-8301-36eee92f0f9b", 00:32:22.537 
"strip_size_kb": 64, 00:32:22.537 "state": "online", 00:32:22.537 "raid_level": "concat", 00:32:22.537 "superblock": true, 00:32:22.537 "num_base_bdevs": 2, 00:32:22.537 "num_base_bdevs_discovered": 2, 00:32:22.537 "num_base_bdevs_operational": 2, 00:32:22.537 "base_bdevs_list": [ 00:32:22.537 { 00:32:22.537 "name": "BaseBdev1", 00:32:22.537 "uuid": "50354f29-3d53-4627-bfd0-5beda7b22005", 00:32:22.537 "is_configured": true, 00:32:22.537 "data_offset": 2048, 00:32:22.537 "data_size": 63488 00:32:22.537 }, 00:32:22.537 { 00:32:22.537 "name": "BaseBdev2", 00:32:22.538 "uuid": "85c22e9c-a620-4534-9f0d-9d2bdf7d9556", 00:32:22.538 "is_configured": true, 00:32:22.538 "data_offset": 2048, 00:32:22.538 "data_size": 63488 00:32:22.538 } 00:32:22.538 ] 00:32:22.538 } 00:32:22.538 } 00:32:22.538 }' 00:32:22.538 23:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:22.538 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:32:22.538 BaseBdev2' 00:32:22.538 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:22.538 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:22.538 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:22.538 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:22.538 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:32:22.538 23:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.538 23:14:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:32:22.538 23:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.538 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:22.538 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:22.538 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:22.538 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:22.538 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:32:22.538 23:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.538 23:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:22.538 23:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.538 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:22.538 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:22.538 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:32:22.538 23:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.538 23:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:22.538 [2024-12-09 23:14:03.152942] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:22.538 [2024-12-09 23:14:03.152984] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:22.538 [2024-12-09 23:14:03.153039] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:22.796 23:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.796 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:32:22.796 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:32:22.796 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:22.796 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:32:22.796 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:32:22.796 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:32:22.796 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:22.796 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:32:22.796 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:22.796 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:22.796 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:22.796 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:22.797 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:22.797 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:22.797 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:22.797 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:32:22.797 23:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.797 23:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:22.797 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:22.797 23:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.797 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:22.797 "name": "Existed_Raid", 00:32:22.797 "uuid": "654e003f-b91f-420f-8301-36eee92f0f9b", 00:32:22.797 "strip_size_kb": 64, 00:32:22.797 "state": "offline", 00:32:22.797 "raid_level": "concat", 00:32:22.797 "superblock": true, 00:32:22.797 "num_base_bdevs": 2, 00:32:22.797 "num_base_bdevs_discovered": 1, 00:32:22.797 "num_base_bdevs_operational": 1, 00:32:22.797 "base_bdevs_list": [ 00:32:22.797 { 00:32:22.797 "name": null, 00:32:22.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:22.797 "is_configured": false, 00:32:22.797 "data_offset": 0, 00:32:22.797 "data_size": 63488 00:32:22.797 }, 00:32:22.797 { 00:32:22.797 "name": "BaseBdev2", 00:32:22.797 "uuid": "85c22e9c-a620-4534-9f0d-9d2bdf7d9556", 00:32:22.797 "is_configured": true, 00:32:22.797 "data_offset": 2048, 00:32:22.797 "data_size": 63488 00:32:22.797 } 00:32:22.797 ] 00:32:22.797 }' 00:32:22.797 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:22.797 23:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:23.363 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:32:23.363 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:23.363 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 
00:32:23.363 23:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.363 23:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:23.363 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:23.363 23:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.363 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:23.363 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:23.363 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:32:23.363 23:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.363 23:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:23.363 [2024-12-09 23:14:03.745223] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:23.363 [2024-12-09 23:14:03.745286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:32:23.363 23:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.363 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:23.363 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:23.363 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:32:23.363 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:23.363 23:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.363 23:14:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:23.363 23:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.363 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:32:23.363 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:32:23.363 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:32:23.363 23:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61845 00:32:23.363 23:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61845 ']' 00:32:23.363 23:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61845 00:32:23.363 23:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:32:23.363 23:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:23.363 23:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61845 00:32:23.363 23:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:23.363 23:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:23.363 killing process with pid 61845 00:32:23.363 23:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61845' 00:32:23.363 23:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61845 00:32:23.363 [2024-12-09 23:14:03.925751] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:23.363 23:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61845 00:32:23.363 [2024-12-09 23:14:03.942736] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:24.738 23:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:32:24.738 00:32:24.738 real 0m5.249s 00:32:24.738 user 0m7.605s 00:32:24.738 sys 0m0.925s 00:32:24.738 23:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:24.738 23:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:24.738 ************************************ 00:32:24.738 END TEST raid_state_function_test_sb 00:32:24.738 ************************************ 00:32:24.738 23:14:05 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:32:24.738 23:14:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:24.738 23:14:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:24.738 23:14:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:24.738 ************************************ 00:32:24.738 START TEST raid_superblock_test 00:32:24.738 ************************************ 00:32:24.738 23:14:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:32:24.739 23:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:32:24.739 23:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:32:24.739 23:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:32:24.739 23:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:32:24.739 23:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:32:24.739 23:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:32:24.739 23:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:32:24.739 
23:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:32:24.739 23:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:32:24.739 23:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:32:24.739 23:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:32:24.739 23:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:32:24.739 23:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:32:24.739 23:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:32:24.739 23:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:32:24.739 23:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:32:24.739 23:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62097 00:32:24.739 23:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62097 00:32:24.739 23:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:32:24.739 23:14:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62097 ']' 00:32:24.739 23:14:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:24.739 23:14:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:24.739 23:14:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:24.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:24.739 23:14:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:24.739 23:14:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:24.739 [2024-12-09 23:14:05.275848] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:32:24.739 [2024-12-09 23:14:05.275992] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62097 ] 00:32:24.998 [2024-12-09 23:14:05.460224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:24.998 [2024-12-09 23:14:05.577518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:25.257 [2024-12-09 23:14:05.791902] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:25.257 [2024-12-09 23:14:05.791939] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:25.516 23:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:25.516 23:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:32:25.516 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:32:25.516 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:25.516 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:32:25.516 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:32:25.516 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:32:25.516 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:25.516 23:14:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:25.516 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:25.516 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:32:25.516 23:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.516 23:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.776 malloc1 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.776 [2024-12-09 23:14:06.181472] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:25.776 [2024-12-09 23:14:06.181660] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:25.776 [2024-12-09 23:14:06.181721] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:32:25.776 [2024-12-09 23:14:06.181803] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:25.776 [2024-12-09 23:14:06.184487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:25.776 [2024-12-09 23:14:06.184651] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:25.776 pt1 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:25.776 23:14:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.776 malloc2 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.776 [2024-12-09 23:14:06.237488] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:25.776 [2024-12-09 23:14:06.237661] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:25.776 [2024-12-09 23:14:06.237738] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:32:25.776 
[2024-12-09 23:14:06.237825] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:25.776 [2024-12-09 23:14:06.240481] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:25.776 [2024-12-09 23:14:06.240618] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:25.776 pt2 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.776 [2024-12-09 23:14:06.249542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:25.776 [2024-12-09 23:14:06.251781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:25.776 [2024-12-09 23:14:06.251978] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:32:25.776 [2024-12-09 23:14:06.251993] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:32:25.776 [2024-12-09 23:14:06.252314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:32:25.776 [2024-12-09 23:14:06.252482] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:32:25.776 [2024-12-09 23:14:06.252498] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:32:25.776 [2024-12-09 23:14:06.252674] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:25.776 "name": "raid_bdev1", 00:32:25.776 "uuid": 
"652662ee-ed90-4120-85de-e0402f5337bb", 00:32:25.776 "strip_size_kb": 64, 00:32:25.776 "state": "online", 00:32:25.776 "raid_level": "concat", 00:32:25.776 "superblock": true, 00:32:25.776 "num_base_bdevs": 2, 00:32:25.776 "num_base_bdevs_discovered": 2, 00:32:25.776 "num_base_bdevs_operational": 2, 00:32:25.776 "base_bdevs_list": [ 00:32:25.776 { 00:32:25.776 "name": "pt1", 00:32:25.776 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:25.776 "is_configured": true, 00:32:25.776 "data_offset": 2048, 00:32:25.776 "data_size": 63488 00:32:25.776 }, 00:32:25.776 { 00:32:25.776 "name": "pt2", 00:32:25.776 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:25.776 "is_configured": true, 00:32:25.776 "data_offset": 2048, 00:32:25.776 "data_size": 63488 00:32:25.776 } 00:32:25.776 ] 00:32:25.776 }' 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:25.776 23:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:26.345 
23:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.345 [2024-12-09 23:14:06.713093] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:26.345 "name": "raid_bdev1", 00:32:26.345 "aliases": [ 00:32:26.345 "652662ee-ed90-4120-85de-e0402f5337bb" 00:32:26.345 ], 00:32:26.345 "product_name": "Raid Volume", 00:32:26.345 "block_size": 512, 00:32:26.345 "num_blocks": 126976, 00:32:26.345 "uuid": "652662ee-ed90-4120-85de-e0402f5337bb", 00:32:26.345 "assigned_rate_limits": { 00:32:26.345 "rw_ios_per_sec": 0, 00:32:26.345 "rw_mbytes_per_sec": 0, 00:32:26.345 "r_mbytes_per_sec": 0, 00:32:26.345 "w_mbytes_per_sec": 0 00:32:26.345 }, 00:32:26.345 "claimed": false, 00:32:26.345 "zoned": false, 00:32:26.345 "supported_io_types": { 00:32:26.345 "read": true, 00:32:26.345 "write": true, 00:32:26.345 "unmap": true, 00:32:26.345 "flush": true, 00:32:26.345 "reset": true, 00:32:26.345 "nvme_admin": false, 00:32:26.345 "nvme_io": false, 00:32:26.345 "nvme_io_md": false, 00:32:26.345 "write_zeroes": true, 00:32:26.345 "zcopy": false, 00:32:26.345 "get_zone_info": false, 00:32:26.345 "zone_management": false, 00:32:26.345 "zone_append": false, 00:32:26.345 "compare": false, 00:32:26.345 "compare_and_write": false, 00:32:26.345 "abort": false, 00:32:26.345 "seek_hole": false, 00:32:26.345 "seek_data": false, 00:32:26.345 "copy": false, 00:32:26.345 "nvme_iov_md": false 00:32:26.345 }, 00:32:26.345 "memory_domains": [ 00:32:26.345 { 00:32:26.345 "dma_device_id": "system", 00:32:26.345 "dma_device_type": 1 00:32:26.345 }, 00:32:26.345 { 00:32:26.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:26.345 "dma_device_type": 2 00:32:26.345 }, 00:32:26.345 { 00:32:26.345 "dma_device_id": "system", 00:32:26.345 
"dma_device_type": 1 00:32:26.345 }, 00:32:26.345 { 00:32:26.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:26.345 "dma_device_type": 2 00:32:26.345 } 00:32:26.345 ], 00:32:26.345 "driver_specific": { 00:32:26.345 "raid": { 00:32:26.345 "uuid": "652662ee-ed90-4120-85de-e0402f5337bb", 00:32:26.345 "strip_size_kb": 64, 00:32:26.345 "state": "online", 00:32:26.345 "raid_level": "concat", 00:32:26.345 "superblock": true, 00:32:26.345 "num_base_bdevs": 2, 00:32:26.345 "num_base_bdevs_discovered": 2, 00:32:26.345 "num_base_bdevs_operational": 2, 00:32:26.345 "base_bdevs_list": [ 00:32:26.345 { 00:32:26.345 "name": "pt1", 00:32:26.345 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:26.345 "is_configured": true, 00:32:26.345 "data_offset": 2048, 00:32:26.345 "data_size": 63488 00:32:26.345 }, 00:32:26.345 { 00:32:26.345 "name": "pt2", 00:32:26.345 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:26.345 "is_configured": true, 00:32:26.345 "data_offset": 2048, 00:32:26.345 "data_size": 63488 00:32:26.345 } 00:32:26.345 ] 00:32:26.345 } 00:32:26.345 } 00:32:26.345 }' 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:32:26.345 pt2' 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # 
rpc_cmd bdev_get_bdevs -b pt1 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.345 [2024-12-09 23:14:06.936815] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=652662ee-ed90-4120-85de-e0402f5337bb 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 652662ee-ed90-4120-85de-e0402f5337bb ']' 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.345 23:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.345 [2024-12-09 23:14:06.976507] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:26.345 [2024-12-09 23:14:06.976535] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:26.345 [2024-12-09 23:14:06.976621] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:26.345 [2024-12-09 23:14:06.976673] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:26.345 [2024-12-09 23:14:06.976688] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:32:26.603 23:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.603 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:32:26.603 23:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:26.603 23:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.603 23:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.603 23:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.603 
23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT 
rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.603 [2024-12-09 23:14:07.104404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:32:26.603 [2024-12-09 23:14:07.106708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:32:26.603 [2024-12-09 23:14:07.106872] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:32:26.603 [2024-12-09 23:14:07.107044] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:32:26.603 [2024-12-09 23:14:07.107242] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:26.603 [2024-12-09 23:14:07.107341] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:32:26.603 request: 00:32:26.603 { 00:32:26.603 "name": "raid_bdev1", 00:32:26.603 "raid_level": "concat", 00:32:26.603 "base_bdevs": [ 00:32:26.603 "malloc1", 00:32:26.603 "malloc2" 00:32:26.603 ], 00:32:26.603 "strip_size_kb": 64, 00:32:26.603 "superblock": false, 00:32:26.603 "method": "bdev_raid_create", 00:32:26.603 "req_id": 1 00:32:26.603 } 00:32:26.603 Got JSON-RPC error response 00:32:26.603 response: 00:32:26.603 { 00:32:26.603 "code": -17, 00:32:26.603 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:32:26.603 } 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.603 [2024-12-09 23:14:07.168273] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:26.603 [2024-12-09 23:14:07.168473] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:26.603 [2024-12-09 23:14:07.168530] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:32:26.603 [2024-12-09 23:14:07.168603] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:26.603 [2024-12-09 23:14:07.171114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:26.603 [2024-12-09 23:14:07.171251] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:26.603 [2024-12-09 23:14:07.171427] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:32:26.603 [2024-12-09 23:14:07.171523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:26.603 pt1 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:26.603 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:26.604 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:26.604 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:26.604 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.604 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.604 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.604 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:26.604 "name": "raid_bdev1", 00:32:26.604 "uuid": "652662ee-ed90-4120-85de-e0402f5337bb", 00:32:26.604 "strip_size_kb": 64, 00:32:26.604 "state": "configuring", 00:32:26.604 "raid_level": "concat", 00:32:26.604 "superblock": true, 00:32:26.604 "num_base_bdevs": 2, 00:32:26.604 "num_base_bdevs_discovered": 1, 00:32:26.604 "num_base_bdevs_operational": 2, 00:32:26.604 "base_bdevs_list": [ 00:32:26.604 { 00:32:26.604 "name": "pt1", 00:32:26.604 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:26.604 "is_configured": true, 00:32:26.604 "data_offset": 2048, 00:32:26.604 "data_size": 63488 00:32:26.604 }, 00:32:26.604 { 00:32:26.604 "name": null, 00:32:26.604 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:26.604 "is_configured": false, 00:32:26.604 "data_offset": 2048, 00:32:26.604 "data_size": 63488 00:32:26.604 } 00:32:26.604 ] 00:32:26.604 }' 00:32:26.604 23:14:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:26.604 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.171 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:32:27.171 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:32:27.171 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:32:27.171 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:27.171 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.171 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.171 [2024-12-09 23:14:07.635673] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:27.171 [2024-12-09 23:14:07.635898] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:27.171 [2024-12-09 23:14:07.635935] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:32:27.171 [2024-12-09 23:14:07.635954] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:27.171 [2024-12-09 23:14:07.636481] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:27.171 [2024-12-09 23:14:07.636510] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:27.171 [2024-12-09 23:14:07.636605] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:32:27.171 [2024-12-09 23:14:07.636636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:27.171 [2024-12-09 23:14:07.636764] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:32:27.171 [2024-12-09 23:14:07.636779] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:32:27.171 [2024-12-09 23:14:07.637066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:32:27.171 [2024-12-09 23:14:07.637210] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:32:27.171 [2024-12-09 23:14:07.637226] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:32:27.171 [2024-12-09 23:14:07.637381] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:27.171 pt2 00:32:27.171 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.171 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:32:27.171 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:32:27.171 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:32:27.171 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:27.171 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:27.171 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:27.171 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:27.171 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:27.171 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:27.171 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:27.171 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:27.171 23:14:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:32:27.171 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:27.171 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:27.171 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.171 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.171 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.171 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:27.171 "name": "raid_bdev1", 00:32:27.171 "uuid": "652662ee-ed90-4120-85de-e0402f5337bb", 00:32:27.171 "strip_size_kb": 64, 00:32:27.171 "state": "online", 00:32:27.171 "raid_level": "concat", 00:32:27.171 "superblock": true, 00:32:27.171 "num_base_bdevs": 2, 00:32:27.171 "num_base_bdevs_discovered": 2, 00:32:27.171 "num_base_bdevs_operational": 2, 00:32:27.171 "base_bdevs_list": [ 00:32:27.171 { 00:32:27.171 "name": "pt1", 00:32:27.171 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:27.171 "is_configured": true, 00:32:27.171 "data_offset": 2048, 00:32:27.171 "data_size": 63488 00:32:27.171 }, 00:32:27.171 { 00:32:27.171 "name": "pt2", 00:32:27.171 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:27.171 "is_configured": true, 00:32:27.171 "data_offset": 2048, 00:32:27.171 "data_size": 63488 00:32:27.171 } 00:32:27.171 ] 00:32:27.171 }' 00:32:27.171 23:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:27.171 23:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.431 23:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:32:27.431 23:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:32:27.431 
23:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:27.431 23:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:27.431 23:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:32:27.431 23:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:27.701 23:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:27.701 23:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:27.701 23:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.701 23:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.701 [2024-12-09 23:14:08.075318] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:27.701 23:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.702 23:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:27.702 "name": "raid_bdev1", 00:32:27.702 "aliases": [ 00:32:27.702 "652662ee-ed90-4120-85de-e0402f5337bb" 00:32:27.702 ], 00:32:27.702 "product_name": "Raid Volume", 00:32:27.702 "block_size": 512, 00:32:27.702 "num_blocks": 126976, 00:32:27.702 "uuid": "652662ee-ed90-4120-85de-e0402f5337bb", 00:32:27.702 "assigned_rate_limits": { 00:32:27.702 "rw_ios_per_sec": 0, 00:32:27.702 "rw_mbytes_per_sec": 0, 00:32:27.702 "r_mbytes_per_sec": 0, 00:32:27.702 "w_mbytes_per_sec": 0 00:32:27.702 }, 00:32:27.702 "claimed": false, 00:32:27.702 "zoned": false, 00:32:27.702 "supported_io_types": { 00:32:27.702 "read": true, 00:32:27.702 "write": true, 00:32:27.702 "unmap": true, 00:32:27.702 "flush": true, 00:32:27.702 "reset": true, 00:32:27.702 "nvme_admin": false, 00:32:27.702 "nvme_io": false, 00:32:27.702 "nvme_io_md": false, 00:32:27.702 
"write_zeroes": true, 00:32:27.702 "zcopy": false, 00:32:27.702 "get_zone_info": false, 00:32:27.702 "zone_management": false, 00:32:27.702 "zone_append": false, 00:32:27.702 "compare": false, 00:32:27.702 "compare_and_write": false, 00:32:27.702 "abort": false, 00:32:27.702 "seek_hole": false, 00:32:27.702 "seek_data": false, 00:32:27.702 "copy": false, 00:32:27.702 "nvme_iov_md": false 00:32:27.702 }, 00:32:27.702 "memory_domains": [ 00:32:27.702 { 00:32:27.702 "dma_device_id": "system", 00:32:27.702 "dma_device_type": 1 00:32:27.702 }, 00:32:27.702 { 00:32:27.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:27.702 "dma_device_type": 2 00:32:27.702 }, 00:32:27.702 { 00:32:27.702 "dma_device_id": "system", 00:32:27.702 "dma_device_type": 1 00:32:27.702 }, 00:32:27.702 { 00:32:27.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:27.702 "dma_device_type": 2 00:32:27.702 } 00:32:27.702 ], 00:32:27.702 "driver_specific": { 00:32:27.702 "raid": { 00:32:27.702 "uuid": "652662ee-ed90-4120-85de-e0402f5337bb", 00:32:27.702 "strip_size_kb": 64, 00:32:27.702 "state": "online", 00:32:27.702 "raid_level": "concat", 00:32:27.702 "superblock": true, 00:32:27.702 "num_base_bdevs": 2, 00:32:27.702 "num_base_bdevs_discovered": 2, 00:32:27.702 "num_base_bdevs_operational": 2, 00:32:27.702 "base_bdevs_list": [ 00:32:27.702 { 00:32:27.702 "name": "pt1", 00:32:27.702 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:27.702 "is_configured": true, 00:32:27.702 "data_offset": 2048, 00:32:27.702 "data_size": 63488 00:32:27.702 }, 00:32:27.702 { 00:32:27.702 "name": "pt2", 00:32:27.702 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:27.702 "is_configured": true, 00:32:27.702 "data_offset": 2048, 00:32:27.702 "data_size": 63488 00:32:27.702 } 00:32:27.702 ] 00:32:27.702 } 00:32:27.702 } 00:32:27.702 }' 00:32:27.702 23:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:32:27.702 23:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:32:27.702 pt2' 00:32:27.702 23:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:27.702 23:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:27.702 23:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:27.702 23:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:32:27.702 23:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.702 23:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.702 23:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:27.702 23:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.702 23:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:27.702 23:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:27.702 23:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:27.702 23:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:32:27.702 23:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:27.702 23:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.702 23:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.702 23:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.702 23:14:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:27.702 23:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:27.702 23:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:27.702 23:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.702 23:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:32:27.702 23:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.702 [2024-12-09 23:14:08.306981] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:27.702 23:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.972 23:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 652662ee-ed90-4120-85de-e0402f5337bb '!=' 652662ee-ed90-4120-85de-e0402f5337bb ']' 00:32:27.972 23:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:32:27.972 23:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:27.972 23:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:32:27.972 23:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62097 00:32:27.972 23:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62097 ']' 00:32:27.972 23:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62097 00:32:27.972 23:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:32:27.972 23:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:27.972 23:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62097 00:32:27.972 killing process with pid 62097 
00:32:27.972 23:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:27.972 23:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:27.972 23:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62097' 00:32:27.972 23:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62097 00:32:27.972 [2024-12-09 23:14:08.381792] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:27.972 [2024-12-09 23:14:08.381896] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:27.972 23:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62097 00:32:27.972 [2024-12-09 23:14:08.381951] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:27.972 [2024-12-09 23:14:08.381968] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:32:27.973 [2024-12-09 23:14:08.596352] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:29.348 23:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:32:29.348 00:32:29.348 real 0m4.573s 00:32:29.348 user 0m6.435s 00:32:29.348 sys 0m0.851s 00:32:29.348 23:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:29.348 ************************************ 00:32:29.348 END TEST raid_superblock_test 00:32:29.349 ************************************ 00:32:29.349 23:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.349 23:14:09 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:32:29.349 23:14:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:32:29.349 23:14:09 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:32:29.349 23:14:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:29.349 ************************************ 00:32:29.349 START TEST raid_read_error_test 00:32:29.349 ************************************ 00:32:29.349 23:14:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:32:29.349 23:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:32:29.349 23:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:32:29.349 23:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:32:29.349 23:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:32:29.349 23:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:29.349 23:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:32:29.349 23:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:29.349 23:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:29.349 23:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:32:29.349 23:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:29.349 23:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:29.349 23:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:29.349 23:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:32:29.349 23:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:32:29.349 23:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:32:29.349 23:14:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:32:29.349 23:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:32:29.349 23:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:32:29.349 23:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:32:29.349 23:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:32:29.349 23:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:32:29.349 23:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:32:29.349 23:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.fnetLmHetW 00:32:29.349 23:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62308 00:32:29.349 23:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62308 00:32:29.349 23:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:32:29.349 23:14:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62308 ']' 00:32:29.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:29.349 23:14:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:29.349 23:14:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:29.349 23:14:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:29.349 23:14:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:29.349 23:14:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.349 [2024-12-09 23:14:09.928973] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:32:29.349 [2024-12-09 23:14:09.929100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62308 ] 00:32:29.607 [2024-12-09 23:14:10.110237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:29.607 [2024-12-09 23:14:10.226432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:29.866 [2024-12-09 23:14:10.450390] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:29.866 [2024-12-09 23:14:10.450672] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:30.437 23:14:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:30.437 23:14:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:32:30.437 23:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:30.437 23:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:32:30.437 23:14:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.437 23:14:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.437 BaseBdev1_malloc 00:32:30.437 23:14:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.437 23:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:32:30.437 23:14:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.437 23:14:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.437 true 00:32:30.437 23:14:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.437 23:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:32:30.437 23:14:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.437 23:14:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.437 [2024-12-09 23:14:10.874334] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:32:30.437 [2024-12-09 23:14:10.874532] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:30.437 [2024-12-09 23:14:10.874598] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:32:30.437 [2024-12-09 23:14:10.874693] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:30.437 [2024-12-09 23:14:10.877380] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:30.437 [2024-12-09 23:14:10.877571] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:30.437 BaseBdev1 00:32:30.437 23:14:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.437 23:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:30.437 23:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:32:30.437 23:14:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.437 23:14:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:32:30.437 BaseBdev2_malloc 00:32:30.437 23:14:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.437 23:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:32:30.437 23:14:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.437 23:14:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.437 true 00:32:30.437 23:14:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.437 23:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:32:30.437 23:14:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.437 23:14:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.437 [2024-12-09 23:14:10.946227] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:32:30.437 [2024-12-09 23:14:10.946289] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:30.437 [2024-12-09 23:14:10.946310] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:32:30.437 [2024-12-09 23:14:10.946325] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:30.437 [2024-12-09 23:14:10.948859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:30.437 [2024-12-09 23:14:10.948905] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:30.437 BaseBdev2 00:32:30.437 23:14:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.437 23:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:32:30.438 
23:14:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.438 23:14:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.438 [2024-12-09 23:14:10.958280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:30.438 [2024-12-09 23:14:10.960714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:30.438 [2024-12-09 23:14:10.961049] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:32:30.438 [2024-12-09 23:14:10.961117] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:32:30.438 [2024-12-09 23:14:10.961530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:32:30.438 [2024-12-09 23:14:10.961895] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:32:30.438 [2024-12-09 23:14:10.962004] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:32:30.438 [2024-12-09 23:14:10.962431] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:30.438 23:14:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.438 23:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:32:30.438 23:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:30.438 23:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:30.438 23:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:30.438 23:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:30.438 23:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:32:30.438 23:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:30.438 23:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:30.438 23:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:30.438 23:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:30.438 23:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:30.438 23:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:30.438 23:14:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.438 23:14:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.438 23:14:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.438 23:14:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:30.438 "name": "raid_bdev1", 00:32:30.438 "uuid": "22117828-7cb8-48ba-a93a-8c342918998b", 00:32:30.438 "strip_size_kb": 64, 00:32:30.438 "state": "online", 00:32:30.438 "raid_level": "concat", 00:32:30.438 "superblock": true, 00:32:30.438 "num_base_bdevs": 2, 00:32:30.438 "num_base_bdevs_discovered": 2, 00:32:30.438 "num_base_bdevs_operational": 2, 00:32:30.438 "base_bdevs_list": [ 00:32:30.438 { 00:32:30.438 "name": "BaseBdev1", 00:32:30.438 "uuid": "be855799-253e-5b7b-a313-37f8a31f96e2", 00:32:30.438 "is_configured": true, 00:32:30.438 "data_offset": 2048, 00:32:30.438 "data_size": 63488 00:32:30.438 }, 00:32:30.438 { 00:32:30.438 "name": "BaseBdev2", 00:32:30.438 "uuid": "4746606e-2f6a-5d18-8ad9-bac4ccb84d21", 00:32:30.438 "is_configured": true, 00:32:30.438 "data_offset": 2048, 00:32:30.438 "data_size": 63488 00:32:30.438 } 00:32:30.438 ] 00:32:30.438 }' 00:32:30.438 23:14:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:30.438 23:14:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:31.008 23:14:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:32:31.008 23:14:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:32:31.008 [2024-12-09 23:14:11.515585] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:32:31.942 23:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:32:31.942 23:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.942 23:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:31.942 23:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.942 23:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:32:31.942 23:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:32:31.942 23:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:32:31.942 23:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:32:31.942 23:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:31.942 23:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:31.942 23:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:31.942 23:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:31.942 23:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:32:31.942 23:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:31.942 23:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:31.942 23:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:31.942 23:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:31.942 23:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:31.942 23:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:31.942 23:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.942 23:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:31.942 23:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.942 23:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:31.942 "name": "raid_bdev1", 00:32:31.942 "uuid": "22117828-7cb8-48ba-a93a-8c342918998b", 00:32:31.942 "strip_size_kb": 64, 00:32:31.942 "state": "online", 00:32:31.942 "raid_level": "concat", 00:32:31.942 "superblock": true, 00:32:31.942 "num_base_bdevs": 2, 00:32:31.942 "num_base_bdevs_discovered": 2, 00:32:31.942 "num_base_bdevs_operational": 2, 00:32:31.942 "base_bdevs_list": [ 00:32:31.942 { 00:32:31.942 "name": "BaseBdev1", 00:32:31.942 "uuid": "be855799-253e-5b7b-a313-37f8a31f96e2", 00:32:31.942 "is_configured": true, 00:32:31.942 "data_offset": 2048, 00:32:31.942 "data_size": 63488 00:32:31.942 }, 00:32:31.942 { 00:32:31.942 "name": "BaseBdev2", 00:32:31.942 "uuid": "4746606e-2f6a-5d18-8ad9-bac4ccb84d21", 00:32:31.942 "is_configured": true, 00:32:31.942 "data_offset": 2048, 00:32:31.942 "data_size": 63488 00:32:31.942 } 00:32:31.942 ] 00:32:31.942 }' 00:32:31.942 23:14:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:31.942 23:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:32.509 23:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:32.509 23:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.509 23:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:32.509 [2024-12-09 23:14:12.880514] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:32.509 [2024-12-09 23:14:12.880564] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:32.509 [2024-12-09 23:14:12.883811] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:32.509 [2024-12-09 23:14:12.883863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:32.509 [2024-12-09 23:14:12.883896] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:32.509 [2024-12-09 23:14:12.883911] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:32:32.509 { 00:32:32.509 "results": [ 00:32:32.509 { 00:32:32.509 "job": "raid_bdev1", 00:32:32.509 "core_mask": "0x1", 00:32:32.509 "workload": "randrw", 00:32:32.509 "percentage": 50, 00:32:32.509 "status": "finished", 00:32:32.509 "queue_depth": 1, 00:32:32.509 "io_size": 131072, 00:32:32.509 "runtime": 1.365181, 00:32:32.509 "iops": 15436.04840676804, 00:32:32.509 "mibps": 1929.506050846005, 00:32:32.509 "io_failed": 1, 00:32:32.509 "io_timeout": 0, 00:32:32.509 "avg_latency_us": 89.45654040666795, 00:32:32.509 "min_latency_us": 27.553413654618474, 00:32:32.509 "max_latency_us": 1500.2216867469879 00:32:32.509 } 00:32:32.509 ], 00:32:32.509 "core_count": 1 00:32:32.509 } 00:32:32.509 23:14:12 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.509 23:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62308 00:32:32.509 23:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62308 ']' 00:32:32.509 23:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62308 00:32:32.509 23:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:32:32.509 23:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:32.509 23:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62308 00:32:32.509 killing process with pid 62308 00:32:32.509 23:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:32.509 23:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:32.509 23:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62308' 00:32:32.509 23:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62308 00:32:32.509 [2024-12-09 23:14:12.934187] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:32.509 23:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62308 00:32:32.509 [2024-12-09 23:14:13.076474] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:33.892 23:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.fnetLmHetW 00:32:33.892 23:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:32:33.892 23:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:32:33.892 23:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:32:33.892 23:14:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:32:33.892 23:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:33.892 23:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:32:33.892 23:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:32:33.892 00:32:33.892 real 0m4.510s 00:32:33.892 user 0m5.378s 00:32:33.892 sys 0m0.621s 00:32:33.892 23:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:33.892 23:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:33.892 ************************************ 00:32:33.892 END TEST raid_read_error_test 00:32:33.892 ************************************ 00:32:33.892 23:14:14 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:32:33.892 23:14:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:32:33.892 23:14:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:33.892 23:14:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:33.892 ************************************ 00:32:33.892 START TEST raid_write_error_test 00:32:33.892 ************************************ 00:32:33.892 23:14:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:32:33.892 23:14:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:32:33.892 23:14:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:32:33.892 23:14:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:32:33.892 23:14:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:32:33.892 23:14:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:32:33.892 23:14:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:32:33.892 23:14:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:33.892 23:14:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:33.892 23:14:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:32:33.892 23:14:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:33.892 23:14:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:33.892 23:14:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:33.892 23:14:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:32:33.892 23:14:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:32:33.892 23:14:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:32:33.892 23:14:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:32:33.892 23:14:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:32:33.892 23:14:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:32:33.892 23:14:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:32:33.892 23:14:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:32:33.892 23:14:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:32:33.892 23:14:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:32:33.892 23:14:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.3Zy87zVR9K 00:32:33.892 23:14:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62454 
00:32:33.892 23:14:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62454 00:32:33.892 23:14:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:32:33.892 23:14:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62454 ']' 00:32:33.892 23:14:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:33.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:33.892 23:14:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:33.892 23:14:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:33.892 23:14:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:33.892 23:14:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:33.892 [2024-12-09 23:14:14.509967] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:32:33.892 [2024-12-09 23:14:14.510110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62454 ] 00:32:34.154 [2024-12-09 23:14:14.700578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:34.419 [2024-12-09 23:14:14.819938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:34.419 [2024-12-09 23:14:15.031749] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:34.419 [2024-12-09 23:14:15.031804] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:34.986 23:14:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:34.986 23:14:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:32:34.986 23:14:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:34.986 23:14:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:32:34.986 23:14:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.986 23:14:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:34.986 BaseBdev1_malloc 00:32:34.986 23:14:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.986 23:14:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:32:34.986 23:14:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.986 23:14:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:34.986 true 00:32:34.986 23:14:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:32:34.986 23:14:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:32:34.986 23:14:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.986 23:14:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:34.986 [2024-12-09 23:14:15.405804] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:32:34.986 [2024-12-09 23:14:15.406006] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:34.986 [2024-12-09 23:14:15.406044] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:32:34.986 [2024-12-09 23:14:15.406070] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:34.986 [2024-12-09 23:14:15.408809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:34.986 [2024-12-09 23:14:15.408855] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:34.986 BaseBdev1 00:32:34.986 23:14:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.986 23:14:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:34.986 23:14:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:32:34.986 23:14:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.986 23:14:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:34.986 BaseBdev2_malloc 00:32:34.986 23:14:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.986 23:14:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:32:34.986 23:14:15 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.986 23:14:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:34.986 true 00:32:34.986 23:14:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.986 23:14:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:32:34.986 23:14:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.986 23:14:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:34.986 [2024-12-09 23:14:15.475254] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:32:34.986 [2024-12-09 23:14:15.475451] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:34.986 [2024-12-09 23:14:15.475493] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:32:34.986 [2024-12-09 23:14:15.475519] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:34.986 [2024-12-09 23:14:15.478040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:34.986 [2024-12-09 23:14:15.478093] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:34.986 BaseBdev2 00:32:34.986 23:14:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.986 23:14:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:32:34.986 23:14:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.986 23:14:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:34.987 [2024-12-09 23:14:15.487311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:32:34.987 [2024-12-09 23:14:15.489472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:34.987 [2024-12-09 23:14:15.489839] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:32:34.987 [2024-12-09 23:14:15.489865] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:32:34.987 [2024-12-09 23:14:15.490130] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:32:34.987 [2024-12-09 23:14:15.490303] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:32:34.987 [2024-12-09 23:14:15.490317] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:32:34.987 [2024-12-09 23:14:15.490488] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:34.987 23:14:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.987 23:14:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:32:34.987 23:14:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:34.987 23:14:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:34.987 23:14:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:34.987 23:14:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:34.987 23:14:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:34.987 23:14:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:34.987 23:14:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:34.987 23:14:15 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:34.987 23:14:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:34.987 23:14:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:34.987 23:14:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:34.987 23:14:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.987 23:14:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:34.987 23:14:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.987 23:14:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:34.987 "name": "raid_bdev1", 00:32:34.987 "uuid": "270337d8-2cb8-4532-b06a-f48079beb989", 00:32:34.987 "strip_size_kb": 64, 00:32:34.987 "state": "online", 00:32:34.987 "raid_level": "concat", 00:32:34.987 "superblock": true, 00:32:34.987 "num_base_bdevs": 2, 00:32:34.987 "num_base_bdevs_discovered": 2, 00:32:34.987 "num_base_bdevs_operational": 2, 00:32:34.987 "base_bdevs_list": [ 00:32:34.987 { 00:32:34.987 "name": "BaseBdev1", 00:32:34.987 "uuid": "4dd9c19c-a373-57cc-aed6-e9c2a2db507e", 00:32:34.987 "is_configured": true, 00:32:34.987 "data_offset": 2048, 00:32:34.987 "data_size": 63488 00:32:34.987 }, 00:32:34.987 { 00:32:34.987 "name": "BaseBdev2", 00:32:34.987 "uuid": "b6f37ec3-60fd-5365-8913-d57ffca0cebd", 00:32:34.987 "is_configured": true, 00:32:34.987 "data_offset": 2048, 00:32:34.987 "data_size": 63488 00:32:34.987 } 00:32:34.987 ] 00:32:34.987 }' 00:32:34.987 23:14:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:34.987 23:14:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:35.554 23:14:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:32:35.554 23:14:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:32:35.554 [2024-12-09 23:14:15.979931] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:32:36.496 23:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:32:36.496 23:14:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.496 23:14:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:36.496 23:14:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.496 23:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:32:36.496 23:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:32:36.496 23:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:32:36.496 23:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:32:36.496 23:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:36.496 23:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:36.496 23:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:36.496 23:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:36.496 23:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:36.496 23:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:36.496 23:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:32:36.496 23:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:36.496 23:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:36.496 23:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:36.496 23:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:36.496 23:14:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.496 23:14:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:36.496 23:14:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.496 23:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:36.496 "name": "raid_bdev1", 00:32:36.496 "uuid": "270337d8-2cb8-4532-b06a-f48079beb989", 00:32:36.496 "strip_size_kb": 64, 00:32:36.496 "state": "online", 00:32:36.496 "raid_level": "concat", 00:32:36.496 "superblock": true, 00:32:36.496 "num_base_bdevs": 2, 00:32:36.496 "num_base_bdevs_discovered": 2, 00:32:36.496 "num_base_bdevs_operational": 2, 00:32:36.496 "base_bdevs_list": [ 00:32:36.496 { 00:32:36.496 "name": "BaseBdev1", 00:32:36.496 "uuid": "4dd9c19c-a373-57cc-aed6-e9c2a2db507e", 00:32:36.496 "is_configured": true, 00:32:36.496 "data_offset": 2048, 00:32:36.496 "data_size": 63488 00:32:36.496 }, 00:32:36.496 { 00:32:36.496 "name": "BaseBdev2", 00:32:36.496 "uuid": "b6f37ec3-60fd-5365-8913-d57ffca0cebd", 00:32:36.496 "is_configured": true, 00:32:36.496 "data_offset": 2048, 00:32:36.496 "data_size": 63488 00:32:36.496 } 00:32:36.496 ] 00:32:36.496 }' 00:32:36.496 23:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:36.496 23:14:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:36.755 23:14:17 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:36.755 23:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.755 23:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:36.755 [2024-12-09 23:14:17.338727] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:36.755 [2024-12-09 23:14:17.338764] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:36.755 [2024-12-09 23:14:17.341597] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:36.755 [2024-12-09 23:14:17.341642] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:36.755 [2024-12-09 23:14:17.341675] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:36.755 [2024-12-09 23:14:17.341695] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:32:36.755 { 00:32:36.755 "results": [ 00:32:36.755 { 00:32:36.755 "job": "raid_bdev1", 00:32:36.755 "core_mask": "0x1", 00:32:36.755 "workload": "randrw", 00:32:36.755 "percentage": 50, 00:32:36.755 "status": "finished", 00:32:36.755 "queue_depth": 1, 00:32:36.755 "io_size": 131072, 00:32:36.755 "runtime": 1.358699, 00:32:36.755 "iops": 15726.80924914201, 00:32:36.755 "mibps": 1965.8511561427513, 00:32:36.755 "io_failed": 1, 00:32:36.755 "io_timeout": 0, 00:32:36.755 "avg_latency_us": 87.5114556405227, 00:32:36.755 "min_latency_us": 27.553413654618474, 00:32:36.755 "max_latency_us": 1566.0208835341366 00:32:36.755 } 00:32:36.755 ], 00:32:36.755 "core_count": 1 00:32:36.755 } 00:32:36.755 23:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.756 23:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62454 00:32:36.756 23:14:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62454 ']' 00:32:36.756 23:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62454 00:32:36.756 23:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:32:36.756 23:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:36.756 23:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62454 00:32:37.014 killing process with pid 62454 00:32:37.014 23:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:37.014 23:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:37.014 23:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62454' 00:32:37.014 23:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62454 00:32:37.014 [2024-12-09 23:14:17.391436] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:37.014 23:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62454 00:32:37.014 [2024-12-09 23:14:17.529145] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:38.470 23:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.3Zy87zVR9K 00:32:38.470 23:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:32:38.470 23:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:32:38.470 23:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:32:38.470 23:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:32:38.470 23:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:38.470 23:14:18 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:32:38.470 23:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:32:38.470 00:32:38.470 real 0m4.342s 00:32:38.470 user 0m5.124s 00:32:38.470 sys 0m0.579s 00:32:38.470 23:14:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:38.470 ************************************ 00:32:38.470 END TEST raid_write_error_test 00:32:38.470 ************************************ 00:32:38.470 23:14:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:38.470 23:14:18 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:32:38.470 23:14:18 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:32:38.470 23:14:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:32:38.470 23:14:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:38.470 23:14:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:38.470 ************************************ 00:32:38.470 START TEST raid_state_function_test 00:32:38.470 ************************************ 00:32:38.470 23:14:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:32:38.470 23:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:32:38.470 23:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:32:38.470 23:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:32:38.470 23:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:32:38.470 23:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:32:38.470 23:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:32:38.470 23:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:32:38.471 23:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:38.471 23:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:38.471 23:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:32:38.471 23:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:38.471 23:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:38.471 23:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:38.471 23:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:32:38.471 23:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:32:38.471 23:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:32:38.471 23:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:32:38.471 23:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:32:38.471 23:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:32:38.471 23:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:32:38.471 23:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:32:38.471 23:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:32:38.471 23:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62592 00:32:38.471 23:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:32:38.471 Process raid pid: 62592 00:32:38.471 23:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62592' 00:32:38.471 23:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62592 00:32:38.471 23:14:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62592 ']' 00:32:38.471 23:14:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:38.471 23:14:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:38.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:38.471 23:14:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:38.471 23:14:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:38.471 23:14:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:38.471 [2024-12-09 23:14:18.924728] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:32:38.471 [2024-12-09 23:14:18.924853] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:38.471 [2024-12-09 23:14:19.100833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:38.730 [2024-12-09 23:14:19.224729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:38.989 [2024-12-09 23:14:19.447993] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:38.989 [2024-12-09 23:14:19.448222] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:39.247 23:14:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:39.247 23:14:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:32:39.247 23:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:32:39.247 23:14:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.247 23:14:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:39.247 [2024-12-09 23:14:19.766395] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:39.247 [2024-12-09 23:14:19.766649] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:39.247 [2024-12-09 23:14:19.766688] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:39.247 [2024-12-09 23:14:19.766716] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:39.247 23:14:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.247 23:14:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:39.247 23:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:39.247 23:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:39.247 23:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:39.247 23:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:39.247 23:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:39.247 23:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:39.247 23:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:39.247 23:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:39.247 23:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:39.247 23:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:39.247 23:14:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.247 23:14:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:39.247 23:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:39.247 23:14:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.247 23:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:39.247 "name": "Existed_Raid", 00:32:39.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:39.247 "strip_size_kb": 0, 00:32:39.247 "state": "configuring", 00:32:39.247 
"raid_level": "raid1", 00:32:39.247 "superblock": false, 00:32:39.247 "num_base_bdevs": 2, 00:32:39.247 "num_base_bdevs_discovered": 0, 00:32:39.247 "num_base_bdevs_operational": 2, 00:32:39.247 "base_bdevs_list": [ 00:32:39.247 { 00:32:39.247 "name": "BaseBdev1", 00:32:39.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:39.247 "is_configured": false, 00:32:39.247 "data_offset": 0, 00:32:39.247 "data_size": 0 00:32:39.247 }, 00:32:39.247 { 00:32:39.247 "name": "BaseBdev2", 00:32:39.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:39.247 "is_configured": false, 00:32:39.247 "data_offset": 0, 00:32:39.247 "data_size": 0 00:32:39.247 } 00:32:39.247 ] 00:32:39.247 }' 00:32:39.247 23:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:39.247 23:14:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:39.815 23:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:39.815 23:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.815 23:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:39.815 [2024-12-09 23:14:20.210291] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:39.815 [2024-12-09 23:14:20.210511] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:32:39.815 23:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.815 23:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:32:39.815 23:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.815 23:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:32:39.815 [2024-12-09 23:14:20.222286] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:39.815 [2024-12-09 23:14:20.222341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:39.815 [2024-12-09 23:14:20.222353] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:39.815 [2024-12-09 23:14:20.222371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:39.815 23:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.815 23:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:32:39.815 23:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.815 23:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:39.815 [2024-12-09 23:14:20.270937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:39.815 BaseBdev1 00:32:39.815 23:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.815 23:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:32:39.815 23:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:32:39.815 23:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:39.815 23:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:32:39.815 23:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:39.815 23:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:39.815 23:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:32:39.815 23:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.815 23:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:39.815 23:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.815 23:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:39.815 23:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.815 23:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:39.815 [ 00:32:39.815 { 00:32:39.815 "name": "BaseBdev1", 00:32:39.815 "aliases": [ 00:32:39.815 "a6d998dc-6e98-4506-befd-71b4481e0c2c" 00:32:39.815 ], 00:32:39.815 "product_name": "Malloc disk", 00:32:39.815 "block_size": 512, 00:32:39.815 "num_blocks": 65536, 00:32:39.815 "uuid": "a6d998dc-6e98-4506-befd-71b4481e0c2c", 00:32:39.815 "assigned_rate_limits": { 00:32:39.815 "rw_ios_per_sec": 0, 00:32:39.815 "rw_mbytes_per_sec": 0, 00:32:39.815 "r_mbytes_per_sec": 0, 00:32:39.815 "w_mbytes_per_sec": 0 00:32:39.815 }, 00:32:39.815 "claimed": true, 00:32:39.815 "claim_type": "exclusive_write", 00:32:39.815 "zoned": false, 00:32:39.815 "supported_io_types": { 00:32:39.815 "read": true, 00:32:39.815 "write": true, 00:32:39.815 "unmap": true, 00:32:39.815 "flush": true, 00:32:39.815 "reset": true, 00:32:39.815 "nvme_admin": false, 00:32:39.815 "nvme_io": false, 00:32:39.815 "nvme_io_md": false, 00:32:39.815 "write_zeroes": true, 00:32:39.815 "zcopy": true, 00:32:39.815 "get_zone_info": false, 00:32:39.815 "zone_management": false, 00:32:39.815 "zone_append": false, 00:32:39.815 "compare": false, 00:32:39.815 "compare_and_write": false, 00:32:39.815 "abort": true, 00:32:39.815 "seek_hole": false, 00:32:39.815 "seek_data": false, 00:32:39.815 "copy": true, 00:32:39.815 "nvme_iov_md": 
false 00:32:39.815 }, 00:32:39.815 "memory_domains": [ 00:32:39.815 { 00:32:39.815 "dma_device_id": "system", 00:32:39.815 "dma_device_type": 1 00:32:39.815 }, 00:32:39.815 { 00:32:39.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:39.815 "dma_device_type": 2 00:32:39.815 } 00:32:39.815 ], 00:32:39.815 "driver_specific": {} 00:32:39.815 } 00:32:39.815 ] 00:32:39.815 23:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.815 23:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:32:39.815 23:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:39.815 23:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:39.815 23:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:39.815 23:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:39.815 23:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:39.815 23:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:39.815 23:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:39.816 23:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:39.816 23:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:39.816 23:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:39.816 23:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:39.816 23:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.816 23:14:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:39.816 23:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:39.816 23:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.816 23:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:39.816 "name": "Existed_Raid", 00:32:39.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:39.816 "strip_size_kb": 0, 00:32:39.816 "state": "configuring", 00:32:39.816 "raid_level": "raid1", 00:32:39.816 "superblock": false, 00:32:39.816 "num_base_bdevs": 2, 00:32:39.816 "num_base_bdevs_discovered": 1, 00:32:39.816 "num_base_bdevs_operational": 2, 00:32:39.816 "base_bdevs_list": [ 00:32:39.816 { 00:32:39.816 "name": "BaseBdev1", 00:32:39.816 "uuid": "a6d998dc-6e98-4506-befd-71b4481e0c2c", 00:32:39.816 "is_configured": true, 00:32:39.816 "data_offset": 0, 00:32:39.816 "data_size": 65536 00:32:39.816 }, 00:32:39.816 { 00:32:39.816 "name": "BaseBdev2", 00:32:39.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:39.816 "is_configured": false, 00:32:39.816 "data_offset": 0, 00:32:39.816 "data_size": 0 00:32:39.816 } 00:32:39.816 ] 00:32:39.816 }' 00:32:39.816 23:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:39.816 23:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:40.075 23:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:40.075 23:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.075 23:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:40.075 [2024-12-09 23:14:20.694406] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:40.076 [2024-12-09 23:14:20.694475] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:32:40.076 23:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.076 23:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:32:40.076 23:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.076 23:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:40.076 [2024-12-09 23:14:20.706429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:40.076 [2024-12-09 23:14:20.708757] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:40.076 [2024-12-09 23:14:20.708911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:40.335 23:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.335 23:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:32:40.335 23:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:40.335 23:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:40.335 23:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:40.335 23:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:40.335 23:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:40.335 23:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:40.335 23:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:32:40.335 23:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:40.335 23:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:40.335 23:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:40.335 23:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:40.335 23:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:40.335 23:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:40.335 23:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.335 23:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:40.335 23:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.335 23:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:40.335 "name": "Existed_Raid", 00:32:40.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:40.335 "strip_size_kb": 0, 00:32:40.335 "state": "configuring", 00:32:40.335 "raid_level": "raid1", 00:32:40.335 "superblock": false, 00:32:40.335 "num_base_bdevs": 2, 00:32:40.335 "num_base_bdevs_discovered": 1, 00:32:40.335 "num_base_bdevs_operational": 2, 00:32:40.335 "base_bdevs_list": [ 00:32:40.335 { 00:32:40.335 "name": "BaseBdev1", 00:32:40.335 "uuid": "a6d998dc-6e98-4506-befd-71b4481e0c2c", 00:32:40.335 "is_configured": true, 00:32:40.335 "data_offset": 0, 00:32:40.335 "data_size": 65536 00:32:40.335 }, 00:32:40.335 { 00:32:40.335 "name": "BaseBdev2", 00:32:40.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:40.335 "is_configured": false, 00:32:40.335 "data_offset": 0, 00:32:40.335 "data_size": 0 00:32:40.335 } 00:32:40.335 
] 00:32:40.335 }' 00:32:40.335 23:14:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:40.335 23:14:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:40.594 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:32:40.594 23:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.595 23:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:40.595 [2024-12-09 23:14:21.154857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:40.595 [2024-12-09 23:14:21.154917] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:32:40.595 [2024-12-09 23:14:21.154928] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:32:40.595 [2024-12-09 23:14:21.155212] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:32:40.595 [2024-12-09 23:14:21.155432] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:32:40.595 [2024-12-09 23:14:21.155449] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:32:40.595 BaseBdev2 00:32:40.595 [2024-12-09 23:14:21.155758] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:40.595 23:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.595 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:32:40.595 23:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:32:40.595 23:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:40.595 23:14:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:32:40.595 23:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:40.595 23:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:40.595 23:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:40.595 23:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.595 23:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:40.595 23:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.595 23:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:40.595 23:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.595 23:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:40.595 [ 00:32:40.595 { 00:32:40.595 "name": "BaseBdev2", 00:32:40.595 "aliases": [ 00:32:40.595 "cdf81297-cb6c-4b14-a7e8-bd667a194065" 00:32:40.595 ], 00:32:40.595 "product_name": "Malloc disk", 00:32:40.595 "block_size": 512, 00:32:40.595 "num_blocks": 65536, 00:32:40.595 "uuid": "cdf81297-cb6c-4b14-a7e8-bd667a194065", 00:32:40.595 "assigned_rate_limits": { 00:32:40.595 "rw_ios_per_sec": 0, 00:32:40.595 "rw_mbytes_per_sec": 0, 00:32:40.595 "r_mbytes_per_sec": 0, 00:32:40.595 "w_mbytes_per_sec": 0 00:32:40.595 }, 00:32:40.595 "claimed": true, 00:32:40.595 "claim_type": "exclusive_write", 00:32:40.595 "zoned": false, 00:32:40.595 "supported_io_types": { 00:32:40.595 "read": true, 00:32:40.595 "write": true, 00:32:40.595 "unmap": true, 00:32:40.595 "flush": true, 00:32:40.595 "reset": true, 00:32:40.595 "nvme_admin": false, 00:32:40.595 "nvme_io": false, 00:32:40.595 "nvme_io_md": 
false, 00:32:40.595 "write_zeroes": true, 00:32:40.595 "zcopy": true, 00:32:40.595 "get_zone_info": false, 00:32:40.595 "zone_management": false, 00:32:40.595 "zone_append": false, 00:32:40.595 "compare": false, 00:32:40.595 "compare_and_write": false, 00:32:40.595 "abort": true, 00:32:40.595 "seek_hole": false, 00:32:40.595 "seek_data": false, 00:32:40.595 "copy": true, 00:32:40.595 "nvme_iov_md": false 00:32:40.595 }, 00:32:40.595 "memory_domains": [ 00:32:40.595 { 00:32:40.595 "dma_device_id": "system", 00:32:40.595 "dma_device_type": 1 00:32:40.595 }, 00:32:40.595 { 00:32:40.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:40.595 "dma_device_type": 2 00:32:40.595 } 00:32:40.595 ], 00:32:40.595 "driver_specific": {} 00:32:40.595 } 00:32:40.595 ] 00:32:40.595 23:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.595 23:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:32:40.595 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:32:40.595 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:40.595 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:32:40.595 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:40.595 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:40.595 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:40.595 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:40.595 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:40.595 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:32:40.595 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:40.595 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:40.595 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:40.595 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:40.595 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:40.595 23:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.595 23:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:40.854 23:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.854 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:40.854 "name": "Existed_Raid", 00:32:40.854 "uuid": "3c7d54d2-7213-4d06-aa2f-3da20a8d0c89", 00:32:40.854 "strip_size_kb": 0, 00:32:40.854 "state": "online", 00:32:40.854 "raid_level": "raid1", 00:32:40.854 "superblock": false, 00:32:40.854 "num_base_bdevs": 2, 00:32:40.854 "num_base_bdevs_discovered": 2, 00:32:40.854 "num_base_bdevs_operational": 2, 00:32:40.854 "base_bdevs_list": [ 00:32:40.854 { 00:32:40.854 "name": "BaseBdev1", 00:32:40.854 "uuid": "a6d998dc-6e98-4506-befd-71b4481e0c2c", 00:32:40.854 "is_configured": true, 00:32:40.854 "data_offset": 0, 00:32:40.854 "data_size": 65536 00:32:40.854 }, 00:32:40.854 { 00:32:40.854 "name": "BaseBdev2", 00:32:40.854 "uuid": "cdf81297-cb6c-4b14-a7e8-bd667a194065", 00:32:40.854 "is_configured": true, 00:32:40.854 "data_offset": 0, 00:32:40.854 "data_size": 65536 00:32:40.854 } 00:32:40.854 ] 00:32:40.854 }' 00:32:40.854 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:32:40.854 23:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:41.116 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:32:41.116 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:32:41.116 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:41.116 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:41.116 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:32:41.116 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:41.116 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:41.116 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:32:41.116 23:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.116 23:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:41.116 [2024-12-09 23:14:21.610598] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:41.116 23:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.116 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:41.116 "name": "Existed_Raid", 00:32:41.116 "aliases": [ 00:32:41.116 "3c7d54d2-7213-4d06-aa2f-3da20a8d0c89" 00:32:41.116 ], 00:32:41.116 "product_name": "Raid Volume", 00:32:41.116 "block_size": 512, 00:32:41.116 "num_blocks": 65536, 00:32:41.116 "uuid": "3c7d54d2-7213-4d06-aa2f-3da20a8d0c89", 00:32:41.116 "assigned_rate_limits": { 00:32:41.116 "rw_ios_per_sec": 0, 00:32:41.116 "rw_mbytes_per_sec": 0, 00:32:41.116 "r_mbytes_per_sec": 
0, 00:32:41.116 "w_mbytes_per_sec": 0 00:32:41.116 }, 00:32:41.117 "claimed": false, 00:32:41.117 "zoned": false, 00:32:41.117 "supported_io_types": { 00:32:41.117 "read": true, 00:32:41.117 "write": true, 00:32:41.117 "unmap": false, 00:32:41.117 "flush": false, 00:32:41.117 "reset": true, 00:32:41.117 "nvme_admin": false, 00:32:41.117 "nvme_io": false, 00:32:41.117 "nvme_io_md": false, 00:32:41.117 "write_zeroes": true, 00:32:41.117 "zcopy": false, 00:32:41.117 "get_zone_info": false, 00:32:41.117 "zone_management": false, 00:32:41.117 "zone_append": false, 00:32:41.117 "compare": false, 00:32:41.117 "compare_and_write": false, 00:32:41.117 "abort": false, 00:32:41.117 "seek_hole": false, 00:32:41.117 "seek_data": false, 00:32:41.117 "copy": false, 00:32:41.117 "nvme_iov_md": false 00:32:41.117 }, 00:32:41.117 "memory_domains": [ 00:32:41.117 { 00:32:41.117 "dma_device_id": "system", 00:32:41.117 "dma_device_type": 1 00:32:41.117 }, 00:32:41.117 { 00:32:41.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:41.117 "dma_device_type": 2 00:32:41.117 }, 00:32:41.117 { 00:32:41.117 "dma_device_id": "system", 00:32:41.117 "dma_device_type": 1 00:32:41.117 }, 00:32:41.117 { 00:32:41.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:41.117 "dma_device_type": 2 00:32:41.117 } 00:32:41.117 ], 00:32:41.117 "driver_specific": { 00:32:41.117 "raid": { 00:32:41.117 "uuid": "3c7d54d2-7213-4d06-aa2f-3da20a8d0c89", 00:32:41.117 "strip_size_kb": 0, 00:32:41.117 "state": "online", 00:32:41.117 "raid_level": "raid1", 00:32:41.117 "superblock": false, 00:32:41.117 "num_base_bdevs": 2, 00:32:41.117 "num_base_bdevs_discovered": 2, 00:32:41.117 "num_base_bdevs_operational": 2, 00:32:41.117 "base_bdevs_list": [ 00:32:41.117 { 00:32:41.117 "name": "BaseBdev1", 00:32:41.117 "uuid": "a6d998dc-6e98-4506-befd-71b4481e0c2c", 00:32:41.117 "is_configured": true, 00:32:41.117 "data_offset": 0, 00:32:41.117 "data_size": 65536 00:32:41.117 }, 00:32:41.117 { 00:32:41.117 "name": "BaseBdev2", 
00:32:41.117 "uuid": "cdf81297-cb6c-4b14-a7e8-bd667a194065", 00:32:41.117 "is_configured": true, 00:32:41.117 "data_offset": 0, 00:32:41.117 "data_size": 65536 00:32:41.117 } 00:32:41.117 ] 00:32:41.117 } 00:32:41.117 } 00:32:41.117 }' 00:32:41.117 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:41.117 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:32:41.117 BaseBdev2' 00:32:41.117 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:41.117 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:41.117 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:41.117 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:41.117 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:32:41.117 23:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.117 23:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:41.117 23:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.380 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:41.381 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:41.381 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:41.381 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:32:41.381 23:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.381 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:41.381 23:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:41.381 23:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.381 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:41.381 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:41.381 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:32:41.381 23:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.381 23:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:41.381 [2024-12-09 23:14:21.822286] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:41.381 23:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.381 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:32:41.381 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:32:41.381 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:41.381 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:32:41.381 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:32:41.381 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:32:41.381 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:32:41.381 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:41.381 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:41.381 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:41.381 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:41.381 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:41.381 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:41.381 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:41.381 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:41.381 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:41.381 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:41.381 23:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.381 23:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:41.381 23:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.381 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:41.381 "name": "Existed_Raid", 00:32:41.381 "uuid": "3c7d54d2-7213-4d06-aa2f-3da20a8d0c89", 00:32:41.381 "strip_size_kb": 0, 00:32:41.381 "state": "online", 00:32:41.381 "raid_level": "raid1", 00:32:41.381 "superblock": false, 00:32:41.381 "num_base_bdevs": 2, 00:32:41.381 "num_base_bdevs_discovered": 1, 00:32:41.381 "num_base_bdevs_operational": 1, 00:32:41.381 "base_bdevs_list": [ 00:32:41.381 
{ 00:32:41.381 "name": null, 00:32:41.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:41.381 "is_configured": false, 00:32:41.381 "data_offset": 0, 00:32:41.381 "data_size": 65536 00:32:41.381 }, 00:32:41.381 { 00:32:41.381 "name": "BaseBdev2", 00:32:41.381 "uuid": "cdf81297-cb6c-4b14-a7e8-bd667a194065", 00:32:41.381 "is_configured": true, 00:32:41.381 "data_offset": 0, 00:32:41.381 "data_size": 65536 00:32:41.381 } 00:32:41.381 ] 00:32:41.381 }' 00:32:41.381 23:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:41.381 23:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:41.948 23:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:32:41.948 23:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:41.948 23:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:41.948 23:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:41.948 23:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.948 23:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:41.948 23:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.948 23:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:41.948 23:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:41.948 23:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:32:41.948 23:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.948 23:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:32:41.948 [2024-12-09 23:14:22.392616] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:41.948 [2024-12-09 23:14:22.392721] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:41.948 [2024-12-09 23:14:22.497206] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:41.948 [2024-12-09 23:14:22.497409] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:41.948 [2024-12-09 23:14:22.497594] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:32:41.948 23:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.948 23:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:41.948 23:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:41.948 23:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:41.948 23:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:32:41.948 23:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.948 23:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:41.948 23:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.948 23:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:32:41.948 23:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:32:41.948 23:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:32:41.949 23:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62592 00:32:41.949 23:14:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62592 ']' 00:32:41.949 23:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62592 00:32:41.949 23:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:32:41.949 23:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:41.949 23:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62592 00:32:42.207 killing process with pid 62592 00:32:42.207 23:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:42.207 23:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:42.207 23:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62592' 00:32:42.207 23:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62592 00:32:42.207 [2024-12-09 23:14:22.594730] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:42.207 23:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62592 00:32:42.207 [2024-12-09 23:14:22.612114] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:43.580 23:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:32:43.580 00:32:43.580 real 0m4.995s 00:32:43.580 user 0m7.076s 00:32:43.580 sys 0m0.922s 00:32:43.580 ************************************ 00:32:43.580 END TEST raid_state_function_test 00:32:43.580 ************************************ 00:32:43.580 23:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:43.580 23:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:43.580 23:14:23 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:32:43.580 23:14:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:32:43.580 23:14:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:43.580 23:14:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:43.580 ************************************ 00:32:43.580 START TEST raid_state_function_test_sb 00:32:43.580 ************************************ 00:32:43.580 23:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:32:43.580 23:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:32:43.580 23:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:32:43.580 23:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:32:43.580 23:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:32:43.580 23:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:32:43.580 23:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:43.580 23:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:32:43.580 23:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:43.580 23:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:43.580 23:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:32:43.580 23:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:43.580 23:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:43.580 23:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:43.580 23:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:32:43.580 23:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:32:43.580 23:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:32:43.580 23:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:32:43.580 23:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:32:43.580 23:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:32:43.580 23:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:32:43.580 23:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:32:43.580 23:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:32:43.580 23:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62845 00:32:43.580 23:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:32:43.580 Process raid pid: 62845 00:32:43.580 23:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62845' 00:32:43.580 23:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62845 00:32:43.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:43.580 23:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62845 ']' 00:32:43.580 23:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:43.580 23:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:43.580 23:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:43.580 23:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:43.580 23:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:43.580 [2024-12-09 23:14:23.994359] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:32:43.580 [2024-12-09 23:14:23.994503] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:43.580 [2024-12-09 23:14:24.178291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:43.838 [2024-12-09 23:14:24.306238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:44.097 [2024-12-09 23:14:24.534706] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:44.097 [2024-12-09 23:14:24.534756] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:44.356 23:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:44.356 23:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:32:44.356 23:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 
BaseBdev2'\''' -n Existed_Raid 00:32:44.356 23:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.356 23:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:44.356 [2024-12-09 23:14:24.843372] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:44.356 [2024-12-09 23:14:24.843464] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:44.356 [2024-12-09 23:14:24.843478] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:44.356 [2024-12-09 23:14:24.843492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:44.356 23:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.356 23:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:44.356 23:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:44.356 23:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:44.356 23:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:44.356 23:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:44.356 23:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:44.356 23:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:44.356 23:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:44.356 23:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:44.356 23:14:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:44.356 23:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:44.356 23:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.356 23:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:44.356 23:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:44.356 23:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.356 23:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:44.356 "name": "Existed_Raid", 00:32:44.356 "uuid": "a606ae10-4f9d-4f03-9d0f-8ac927b5a782", 00:32:44.356 "strip_size_kb": 0, 00:32:44.356 "state": "configuring", 00:32:44.356 "raid_level": "raid1", 00:32:44.356 "superblock": true, 00:32:44.356 "num_base_bdevs": 2, 00:32:44.356 "num_base_bdevs_discovered": 0, 00:32:44.356 "num_base_bdevs_operational": 2, 00:32:44.356 "base_bdevs_list": [ 00:32:44.356 { 00:32:44.356 "name": "BaseBdev1", 00:32:44.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:44.356 "is_configured": false, 00:32:44.356 "data_offset": 0, 00:32:44.356 "data_size": 0 00:32:44.356 }, 00:32:44.356 { 00:32:44.356 "name": "BaseBdev2", 00:32:44.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:44.356 "is_configured": false, 00:32:44.356 "data_offset": 0, 00:32:44.356 "data_size": 0 00:32:44.356 } 00:32:44.356 ] 00:32:44.356 }' 00:32:44.356 23:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:44.356 23:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:44.923 23:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:44.923 
23:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.923 23:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:44.923 [2024-12-09 23:14:25.302669] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:44.923 [2024-12-09 23:14:25.302712] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:32:44.923 23:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.923 23:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:32:44.923 23:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.923 23:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:44.923 [2024-12-09 23:14:25.314644] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:44.923 [2024-12-09 23:14:25.314694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:44.923 [2024-12-09 23:14:25.314705] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:44.923 [2024-12-09 23:14:25.314721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:44.923 23:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.923 23:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:32:44.923 23:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.923 23:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:44.923 [2024-12-09 
23:14:25.362434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:44.923 BaseBdev1 00:32:44.923 23:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.923 23:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:32:44.923 23:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:32:44.923 23:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:44.923 23:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:32:44.923 23:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:44.923 23:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:44.923 23:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:44.923 23:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.923 23:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:44.923 23:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.923 23:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:44.923 23:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.923 23:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:44.923 [ 00:32:44.923 { 00:32:44.923 "name": "BaseBdev1", 00:32:44.923 "aliases": [ 00:32:44.923 "95ca4f8e-ebfe-4a4c-a85f-021ff0ac4662" 00:32:44.923 ], 00:32:44.923 "product_name": "Malloc disk", 00:32:44.923 "block_size": 512, 00:32:44.923 "num_blocks": 
65536, 00:32:44.923 "uuid": "95ca4f8e-ebfe-4a4c-a85f-021ff0ac4662", 00:32:44.923 "assigned_rate_limits": { 00:32:44.924 "rw_ios_per_sec": 0, 00:32:44.924 "rw_mbytes_per_sec": 0, 00:32:44.924 "r_mbytes_per_sec": 0, 00:32:44.924 "w_mbytes_per_sec": 0 00:32:44.924 }, 00:32:44.924 "claimed": true, 00:32:44.924 "claim_type": "exclusive_write", 00:32:44.924 "zoned": false, 00:32:44.924 "supported_io_types": { 00:32:44.924 "read": true, 00:32:44.924 "write": true, 00:32:44.924 "unmap": true, 00:32:44.924 "flush": true, 00:32:44.924 "reset": true, 00:32:44.924 "nvme_admin": false, 00:32:44.924 "nvme_io": false, 00:32:44.924 "nvme_io_md": false, 00:32:44.924 "write_zeroes": true, 00:32:44.924 "zcopy": true, 00:32:44.924 "get_zone_info": false, 00:32:44.924 "zone_management": false, 00:32:44.924 "zone_append": false, 00:32:44.924 "compare": false, 00:32:44.924 "compare_and_write": false, 00:32:44.924 "abort": true, 00:32:44.924 "seek_hole": false, 00:32:44.924 "seek_data": false, 00:32:44.924 "copy": true, 00:32:44.924 "nvme_iov_md": false 00:32:44.924 }, 00:32:44.924 "memory_domains": [ 00:32:44.924 { 00:32:44.924 "dma_device_id": "system", 00:32:44.924 "dma_device_type": 1 00:32:44.924 }, 00:32:44.924 { 00:32:44.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:44.924 "dma_device_type": 2 00:32:44.924 } 00:32:44.924 ], 00:32:44.924 "driver_specific": {} 00:32:44.924 } 00:32:44.924 ] 00:32:44.924 23:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.924 23:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:32:44.924 23:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:44.924 23:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:44.924 23:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:32:44.924 23:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:44.924 23:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:44.924 23:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:44.924 23:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:44.924 23:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:44.924 23:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:44.924 23:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:44.924 23:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:44.924 23:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:44.924 23:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.924 23:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:44.924 23:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.924 23:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:44.924 "name": "Existed_Raid", 00:32:44.924 "uuid": "689996a6-ef8d-4ab8-b218-4bb4a8dd7306", 00:32:44.924 "strip_size_kb": 0, 00:32:44.924 "state": "configuring", 00:32:44.924 "raid_level": "raid1", 00:32:44.924 "superblock": true, 00:32:44.924 "num_base_bdevs": 2, 00:32:44.924 "num_base_bdevs_discovered": 1, 00:32:44.924 "num_base_bdevs_operational": 2, 00:32:44.924 "base_bdevs_list": [ 00:32:44.924 { 00:32:44.924 "name": "BaseBdev1", 00:32:44.924 "uuid": 
"95ca4f8e-ebfe-4a4c-a85f-021ff0ac4662", 00:32:44.924 "is_configured": true, 00:32:44.924 "data_offset": 2048, 00:32:44.924 "data_size": 63488 00:32:44.924 }, 00:32:44.924 { 00:32:44.924 "name": "BaseBdev2", 00:32:44.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:44.924 "is_configured": false, 00:32:44.924 "data_offset": 0, 00:32:44.924 "data_size": 0 00:32:44.924 } 00:32:44.924 ] 00:32:44.924 }' 00:32:44.924 23:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:44.924 23:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:45.494 23:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:45.494 23:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.494 23:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:45.494 [2024-12-09 23:14:25.842261] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:45.494 [2024-12-09 23:14:25.842323] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:32:45.494 23:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.494 23:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:32:45.494 23:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.494 23:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:45.494 [2024-12-09 23:14:25.854302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:45.494 [2024-12-09 23:14:25.856496] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 
00:32:45.494 [2024-12-09 23:14:25.856541] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:45.494 23:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.494 23:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:32:45.494 23:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:45.494 23:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:45.494 23:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:45.494 23:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:45.494 23:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:45.494 23:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:45.494 23:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:45.494 23:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:45.494 23:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:45.494 23:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:45.494 23:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:45.494 23:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:45.494 23:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:45.494 23:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:32:45.494 23:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:45.494 23:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.494 23:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:45.494 "name": "Existed_Raid", 00:32:45.494 "uuid": "57594fd1-4c60-42bf-8f76-fb3ead367f96", 00:32:45.494 "strip_size_kb": 0, 00:32:45.494 "state": "configuring", 00:32:45.494 "raid_level": "raid1", 00:32:45.494 "superblock": true, 00:32:45.494 "num_base_bdevs": 2, 00:32:45.494 "num_base_bdevs_discovered": 1, 00:32:45.494 "num_base_bdevs_operational": 2, 00:32:45.494 "base_bdevs_list": [ 00:32:45.494 { 00:32:45.494 "name": "BaseBdev1", 00:32:45.494 "uuid": "95ca4f8e-ebfe-4a4c-a85f-021ff0ac4662", 00:32:45.494 "is_configured": true, 00:32:45.494 "data_offset": 2048, 00:32:45.494 "data_size": 63488 00:32:45.494 }, 00:32:45.494 { 00:32:45.494 "name": "BaseBdev2", 00:32:45.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:45.494 "is_configured": false, 00:32:45.494 "data_offset": 0, 00:32:45.494 "data_size": 0 00:32:45.494 } 00:32:45.494 ] 00:32:45.494 }' 00:32:45.494 23:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:45.494 23:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:45.752 23:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:32:45.752 23:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.752 23:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:45.752 [2024-12-09 23:14:26.338294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:45.752 [2024-12-09 23:14:26.338669] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: 
io device register 0x617000007e80 00:32:45.752 [2024-12-09 23:14:26.338690] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:32:45.752 [2024-12-09 23:14:26.338991] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:32:45.752 [2024-12-09 23:14:26.339160] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:32:45.752 [2024-12-09 23:14:26.339177] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:32:45.752 BaseBdev2 00:32:45.752 [2024-12-09 23:14:26.339347] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:45.752 23:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.752 23:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:32:45.752 23:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:32:45.752 23:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:45.752 23:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:32:45.752 23:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:45.752 23:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:45.752 23:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:45.752 23:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.752 23:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:45.752 23:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.752 23:14:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:45.752 23:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.752 23:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:45.752 [ 00:32:45.752 { 00:32:45.752 "name": "BaseBdev2", 00:32:45.752 "aliases": [ 00:32:45.752 "cf7e6108-f0e0-42a8-8e8b-a180296f69b2" 00:32:45.752 ], 00:32:45.752 "product_name": "Malloc disk", 00:32:45.752 "block_size": 512, 00:32:45.752 "num_blocks": 65536, 00:32:45.752 "uuid": "cf7e6108-f0e0-42a8-8e8b-a180296f69b2", 00:32:45.752 "assigned_rate_limits": { 00:32:45.752 "rw_ios_per_sec": 0, 00:32:45.752 "rw_mbytes_per_sec": 0, 00:32:45.752 "r_mbytes_per_sec": 0, 00:32:45.752 "w_mbytes_per_sec": 0 00:32:45.752 }, 00:32:45.752 "claimed": true, 00:32:45.752 "claim_type": "exclusive_write", 00:32:45.752 "zoned": false, 00:32:45.752 "supported_io_types": { 00:32:45.752 "read": true, 00:32:45.752 "write": true, 00:32:45.752 "unmap": true, 00:32:45.752 "flush": true, 00:32:45.752 "reset": true, 00:32:45.752 "nvme_admin": false, 00:32:45.752 "nvme_io": false, 00:32:45.752 "nvme_io_md": false, 00:32:45.752 "write_zeroes": true, 00:32:45.752 "zcopy": true, 00:32:45.752 "get_zone_info": false, 00:32:45.752 "zone_management": false, 00:32:45.752 "zone_append": false, 00:32:45.752 "compare": false, 00:32:45.752 "compare_and_write": false, 00:32:45.752 "abort": true, 00:32:45.752 "seek_hole": false, 00:32:45.752 "seek_data": false, 00:32:45.752 "copy": true, 00:32:45.752 "nvme_iov_md": false 00:32:45.752 }, 00:32:45.752 "memory_domains": [ 00:32:45.752 { 00:32:45.752 "dma_device_id": "system", 00:32:45.752 "dma_device_type": 1 00:32:45.752 }, 00:32:45.752 { 00:32:45.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:46.009 "dma_device_type": 2 00:32:46.009 } 00:32:46.009 ], 00:32:46.009 "driver_specific": {} 00:32:46.009 } 00:32:46.009 ] 
00:32:46.009 23:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.009 23:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:32:46.009 23:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:32:46.009 23:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:46.009 23:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:32:46.009 23:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:46.009 23:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:46.009 23:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:46.009 23:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:46.009 23:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:46.009 23:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:46.009 23:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:46.009 23:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:46.009 23:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:46.009 23:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:46.009 23:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:46.009 23:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.009 
23:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:46.009 23:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.009 23:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:46.009 "name": "Existed_Raid", 00:32:46.009 "uuid": "57594fd1-4c60-42bf-8f76-fb3ead367f96", 00:32:46.009 "strip_size_kb": 0, 00:32:46.009 "state": "online", 00:32:46.009 "raid_level": "raid1", 00:32:46.009 "superblock": true, 00:32:46.009 "num_base_bdevs": 2, 00:32:46.009 "num_base_bdevs_discovered": 2, 00:32:46.009 "num_base_bdevs_operational": 2, 00:32:46.009 "base_bdevs_list": [ 00:32:46.009 { 00:32:46.009 "name": "BaseBdev1", 00:32:46.009 "uuid": "95ca4f8e-ebfe-4a4c-a85f-021ff0ac4662", 00:32:46.009 "is_configured": true, 00:32:46.009 "data_offset": 2048, 00:32:46.009 "data_size": 63488 00:32:46.009 }, 00:32:46.009 { 00:32:46.009 "name": "BaseBdev2", 00:32:46.009 "uuid": "cf7e6108-f0e0-42a8-8e8b-a180296f69b2", 00:32:46.009 "is_configured": true, 00:32:46.009 "data_offset": 2048, 00:32:46.009 "data_size": 63488 00:32:46.009 } 00:32:46.009 ] 00:32:46.009 }' 00:32:46.009 23:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:46.009 23:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:46.267 23:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:32:46.267 23:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:32:46.267 23:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:46.267 23:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:46.267 23:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:32:46.267 23:14:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:46.267 23:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:32:46.267 23:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.267 23:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:46.267 23:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:46.267 [2024-12-09 23:14:26.826538] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:46.267 23:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.267 23:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:46.267 "name": "Existed_Raid", 00:32:46.267 "aliases": [ 00:32:46.267 "57594fd1-4c60-42bf-8f76-fb3ead367f96" 00:32:46.267 ], 00:32:46.267 "product_name": "Raid Volume", 00:32:46.267 "block_size": 512, 00:32:46.267 "num_blocks": 63488, 00:32:46.267 "uuid": "57594fd1-4c60-42bf-8f76-fb3ead367f96", 00:32:46.267 "assigned_rate_limits": { 00:32:46.267 "rw_ios_per_sec": 0, 00:32:46.267 "rw_mbytes_per_sec": 0, 00:32:46.267 "r_mbytes_per_sec": 0, 00:32:46.267 "w_mbytes_per_sec": 0 00:32:46.267 }, 00:32:46.267 "claimed": false, 00:32:46.267 "zoned": false, 00:32:46.267 "supported_io_types": { 00:32:46.267 "read": true, 00:32:46.267 "write": true, 00:32:46.267 "unmap": false, 00:32:46.267 "flush": false, 00:32:46.267 "reset": true, 00:32:46.267 "nvme_admin": false, 00:32:46.267 "nvme_io": false, 00:32:46.267 "nvme_io_md": false, 00:32:46.267 "write_zeroes": true, 00:32:46.267 "zcopy": false, 00:32:46.267 "get_zone_info": false, 00:32:46.267 "zone_management": false, 00:32:46.267 "zone_append": false, 00:32:46.267 "compare": false, 00:32:46.267 "compare_and_write": false, 00:32:46.267 "abort": false, 
00:32:46.267 "seek_hole": false, 00:32:46.267 "seek_data": false, 00:32:46.267 "copy": false, 00:32:46.267 "nvme_iov_md": false 00:32:46.267 }, 00:32:46.267 "memory_domains": [ 00:32:46.267 { 00:32:46.267 "dma_device_id": "system", 00:32:46.267 "dma_device_type": 1 00:32:46.267 }, 00:32:46.267 { 00:32:46.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:46.267 "dma_device_type": 2 00:32:46.267 }, 00:32:46.267 { 00:32:46.267 "dma_device_id": "system", 00:32:46.267 "dma_device_type": 1 00:32:46.267 }, 00:32:46.267 { 00:32:46.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:46.267 "dma_device_type": 2 00:32:46.267 } 00:32:46.267 ], 00:32:46.267 "driver_specific": { 00:32:46.267 "raid": { 00:32:46.267 "uuid": "57594fd1-4c60-42bf-8f76-fb3ead367f96", 00:32:46.267 "strip_size_kb": 0, 00:32:46.267 "state": "online", 00:32:46.267 "raid_level": "raid1", 00:32:46.267 "superblock": true, 00:32:46.267 "num_base_bdevs": 2, 00:32:46.267 "num_base_bdevs_discovered": 2, 00:32:46.267 "num_base_bdevs_operational": 2, 00:32:46.267 "base_bdevs_list": [ 00:32:46.267 { 00:32:46.267 "name": "BaseBdev1", 00:32:46.267 "uuid": "95ca4f8e-ebfe-4a4c-a85f-021ff0ac4662", 00:32:46.267 "is_configured": true, 00:32:46.267 "data_offset": 2048, 00:32:46.267 "data_size": 63488 00:32:46.267 }, 00:32:46.267 { 00:32:46.267 "name": "BaseBdev2", 00:32:46.267 "uuid": "cf7e6108-f0e0-42a8-8e8b-a180296f69b2", 00:32:46.267 "is_configured": true, 00:32:46.267 "data_offset": 2048, 00:32:46.267 "data_size": 63488 00:32:46.267 } 00:32:46.267 ] 00:32:46.267 } 00:32:46.267 } 00:32:46.267 }' 00:32:46.267 23:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:46.563 23:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:32:46.563 BaseBdev2' 00:32:46.563 23:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:32:46.563 23:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:46.563 23:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:46.563 23:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:32:46.563 23:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:46.563 23:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.563 23:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:46.563 23:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.563 23:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:46.563 23:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:46.563 23:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:46.563 23:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:32:46.563 23:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.563 23:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:46.563 23:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:46.563 23:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.563 23:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:46.563 23:14:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:46.563 23:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:32:46.563 23:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.563 23:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:46.563 [2024-12-09 23:14:27.042270] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:46.563 23:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.563 23:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:32:46.563 23:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:32:46.563 23:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:46.563 23:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:32:46.563 23:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:32:46.563 23:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:32:46.564 23:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:46.564 23:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:46.564 23:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:46.564 23:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:46.564 23:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:46.564 23:14:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:46.564 23:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:46.564 23:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:46.564 23:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:46.564 23:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:46.564 23:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:46.564 23:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.564 23:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:46.564 23:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.564 23:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:46.564 "name": "Existed_Raid", 00:32:46.564 "uuid": "57594fd1-4c60-42bf-8f76-fb3ead367f96", 00:32:46.564 "strip_size_kb": 0, 00:32:46.564 "state": "online", 00:32:46.564 "raid_level": "raid1", 00:32:46.564 "superblock": true, 00:32:46.564 "num_base_bdevs": 2, 00:32:46.564 "num_base_bdevs_discovered": 1, 00:32:46.564 "num_base_bdevs_operational": 1, 00:32:46.564 "base_bdevs_list": [ 00:32:46.564 { 00:32:46.564 "name": null, 00:32:46.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:46.564 "is_configured": false, 00:32:46.564 "data_offset": 0, 00:32:46.564 "data_size": 63488 00:32:46.564 }, 00:32:46.564 { 00:32:46.564 "name": "BaseBdev2", 00:32:46.564 "uuid": "cf7e6108-f0e0-42a8-8e8b-a180296f69b2", 00:32:46.564 "is_configured": true, 00:32:46.564 "data_offset": 2048, 00:32:46.564 "data_size": 63488 00:32:46.564 } 00:32:46.564 ] 00:32:46.564 }' 00:32:46.564 23:14:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:46.564 23:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:47.130 23:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:32:47.130 23:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:47.130 23:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:47.130 23:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:47.130 23:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.130 23:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:47.130 23:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.130 23:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:47.130 23:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:47.130 23:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:32:47.130 23:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.130 23:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:47.130 [2024-12-09 23:14:27.592686] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:47.130 [2024-12-09 23:14:27.592790] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:47.130 [2024-12-09 23:14:27.691468] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:47.130 [2024-12-09 23:14:27.691682] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:47.130 [2024-12-09 23:14:27.691843] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:32:47.130 23:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.130 23:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:47.130 23:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:47.130 23:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:47.130 23:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:32:47.130 23:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.130 23:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:47.130 23:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.130 23:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:32:47.130 23:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:32:47.130 23:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:32:47.130 23:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62845 00:32:47.130 23:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62845 ']' 00:32:47.130 23:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62845 00:32:47.130 23:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:32:47.130 23:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:32:47.130 23:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62845 00:32:47.388 killing process with pid 62845 00:32:47.388 23:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:47.388 23:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:47.388 23:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62845' 00:32:47.388 23:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62845 00:32:47.388 [2024-12-09 23:14:27.776840] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:47.388 23:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62845 00:32:47.388 [2024-12-09 23:14:27.794971] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:48.323 23:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:32:48.323 00:32:48.323 real 0m5.074s 00:32:48.323 user 0m7.236s 00:32:48.323 sys 0m0.913s 00:32:48.581 23:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:48.581 23:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:48.581 ************************************ 00:32:48.581 END TEST raid_state_function_test_sb 00:32:48.581 ************************************ 00:32:48.581 23:14:29 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:32:48.581 23:14:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:48.581 23:14:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:48.581 23:14:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:48.581 ************************************ 00:32:48.581 START TEST 
raid_superblock_test 00:32:48.581 ************************************ 00:32:48.581 23:14:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:32:48.581 23:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:32:48.581 23:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:32:48.581 23:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:32:48.581 23:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:32:48.581 23:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:32:48.581 23:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:32:48.581 23:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:32:48.581 23:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:32:48.581 23:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:32:48.581 23:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:32:48.581 23:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:32:48.581 23:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:32:48.581 23:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:32:48.581 23:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:32:48.581 23:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:32:48.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:48.581 23:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63097 00:32:48.581 23:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63097 00:32:48.581 23:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:32:48.581 23:14:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63097 ']' 00:32:48.581 23:14:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:48.581 23:14:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:48.581 23:14:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:48.581 23:14:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:48.582 23:14:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.582 [2024-12-09 23:14:29.130131] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:32:48.582 [2024-12-09 23:14:29.130263] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63097 ] 00:32:48.841 [2024-12-09 23:14:29.311460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:48.841 [2024-12-09 23:14:29.434638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:49.098 [2024-12-09 23:14:29.647574] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:49.098 [2024-12-09 23:14:29.647637] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:32:49.664 
23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:49.664 malloc1 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:49.664 [2024-12-09 23:14:30.071553] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:49.664 [2024-12-09 23:14:30.071622] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:49.664 [2024-12-09 23:14:30.071649] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:32:49.664 [2024-12-09 23:14:30.071661] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:49.664 [2024-12-09 23:14:30.074148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:49.664 [2024-12-09 23:14:30.074209] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:49.664 pt1 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:49.664 malloc2 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:49.664 [2024-12-09 23:14:30.125752] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:49.664 [2024-12-09 23:14:30.125918] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:49.664 [2024-12-09 23:14:30.125953] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:32:49.664 [2024-12-09 23:14:30.125966] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:49.664 [2024-12-09 23:14:30.128536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:49.664 [2024-12-09 23:14:30.128576] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:49.664 
pt2 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:49.664 [2024-12-09 23:14:30.137789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:49.664 [2024-12-09 23:14:30.139973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:49.664 [2024-12-09 23:14:30.140171] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:32:49.664 [2024-12-09 23:14:30.140191] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:32:49.664 [2024-12-09 23:14:30.140521] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:32:49.664 [2024-12-09 23:14:30.140686] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:32:49.664 [2024-12-09 23:14:30.140705] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:32:49.664 [2024-12-09 23:14:30.140857] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:49.664 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:49.665 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:49.665 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.665 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:49.665 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.665 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:49.665 "name": "raid_bdev1", 00:32:49.665 "uuid": "4e147d0b-6a67-40a2-b6bb-13ce57e5d49f", 00:32:49.665 "strip_size_kb": 0, 00:32:49.665 "state": "online", 00:32:49.665 "raid_level": "raid1", 00:32:49.665 "superblock": true, 00:32:49.665 "num_base_bdevs": 2, 00:32:49.665 "num_base_bdevs_discovered": 2, 00:32:49.665 "num_base_bdevs_operational": 2, 00:32:49.665 "base_bdevs_list": [ 00:32:49.665 { 00:32:49.665 "name": "pt1", 00:32:49.665 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:32:49.665 "is_configured": true, 00:32:49.665 "data_offset": 2048, 00:32:49.665 "data_size": 63488 00:32:49.665 }, 00:32:49.665 { 00:32:49.665 "name": "pt2", 00:32:49.665 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:49.665 "is_configured": true, 00:32:49.665 "data_offset": 2048, 00:32:49.665 "data_size": 63488 00:32:49.665 } 00:32:49.665 ] 00:32:49.665 }' 00:32:49.665 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:49.665 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.230 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:32:50.230 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:32:50.230 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:50.230 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:50.230 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:32:50.230 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:50.230 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:50.230 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.230 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.230 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:50.230 [2024-12-09 23:14:30.565562] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:50.230 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.230 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:32:50.230 "name": "raid_bdev1", 00:32:50.230 "aliases": [ 00:32:50.230 "4e147d0b-6a67-40a2-b6bb-13ce57e5d49f" 00:32:50.230 ], 00:32:50.230 "product_name": "Raid Volume", 00:32:50.230 "block_size": 512, 00:32:50.230 "num_blocks": 63488, 00:32:50.230 "uuid": "4e147d0b-6a67-40a2-b6bb-13ce57e5d49f", 00:32:50.230 "assigned_rate_limits": { 00:32:50.230 "rw_ios_per_sec": 0, 00:32:50.230 "rw_mbytes_per_sec": 0, 00:32:50.230 "r_mbytes_per_sec": 0, 00:32:50.230 "w_mbytes_per_sec": 0 00:32:50.230 }, 00:32:50.230 "claimed": false, 00:32:50.230 "zoned": false, 00:32:50.230 "supported_io_types": { 00:32:50.230 "read": true, 00:32:50.230 "write": true, 00:32:50.230 "unmap": false, 00:32:50.230 "flush": false, 00:32:50.230 "reset": true, 00:32:50.230 "nvme_admin": false, 00:32:50.230 "nvme_io": false, 00:32:50.230 "nvme_io_md": false, 00:32:50.230 "write_zeroes": true, 00:32:50.230 "zcopy": false, 00:32:50.230 "get_zone_info": false, 00:32:50.230 "zone_management": false, 00:32:50.230 "zone_append": false, 00:32:50.230 "compare": false, 00:32:50.230 "compare_and_write": false, 00:32:50.230 "abort": false, 00:32:50.230 "seek_hole": false, 00:32:50.230 "seek_data": false, 00:32:50.230 "copy": false, 00:32:50.230 "nvme_iov_md": false 00:32:50.230 }, 00:32:50.230 "memory_domains": [ 00:32:50.230 { 00:32:50.230 "dma_device_id": "system", 00:32:50.230 "dma_device_type": 1 00:32:50.230 }, 00:32:50.230 { 00:32:50.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:50.230 "dma_device_type": 2 00:32:50.230 }, 00:32:50.230 { 00:32:50.230 "dma_device_id": "system", 00:32:50.230 "dma_device_type": 1 00:32:50.230 }, 00:32:50.230 { 00:32:50.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:50.230 "dma_device_type": 2 00:32:50.230 } 00:32:50.230 ], 00:32:50.230 "driver_specific": { 00:32:50.230 "raid": { 00:32:50.230 "uuid": "4e147d0b-6a67-40a2-b6bb-13ce57e5d49f", 00:32:50.230 "strip_size_kb": 0, 00:32:50.230 "state": "online", 00:32:50.230 "raid_level": "raid1", 
00:32:50.230 "superblock": true, 00:32:50.230 "num_base_bdevs": 2, 00:32:50.230 "num_base_bdevs_discovered": 2, 00:32:50.230 "num_base_bdevs_operational": 2, 00:32:50.230 "base_bdevs_list": [ 00:32:50.230 { 00:32:50.230 "name": "pt1", 00:32:50.230 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:50.230 "is_configured": true, 00:32:50.230 "data_offset": 2048, 00:32:50.230 "data_size": 63488 00:32:50.230 }, 00:32:50.230 { 00:32:50.230 "name": "pt2", 00:32:50.230 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:50.230 "is_configured": true, 00:32:50.230 "data_offset": 2048, 00:32:50.230 "data_size": 63488 00:32:50.230 } 00:32:50.230 ] 00:32:50.230 } 00:32:50.230 } 00:32:50.230 }' 00:32:50.230 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:50.230 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:32:50.230 pt2' 00:32:50.230 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:50.230 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:50.230 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:50.230 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:50.230 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:32:50.230 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.230 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.230 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.230 23:14:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:50.230 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:50.230 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:50.230 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:50.230 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:32:50.230 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.230 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.230 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.230 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:50.230 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:50.230 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:50.230 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.230 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.230 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:32:50.231 [2024-12-09 23:14:30.781251] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:50.231 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.231 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4e147d0b-6a67-40a2-b6bb-13ce57e5d49f 00:32:50.231 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4e147d0b-6a67-40a2-b6bb-13ce57e5d49f ']' 00:32:50.231 23:14:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:50.231 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.231 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.231 [2024-12-09 23:14:30.820828] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:50.231 [2024-12-09 23:14:30.820959] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:50.231 [2024-12-09 23:14:30.821104] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:50.231 [2024-12-09 23:14:30.821196] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:50.231 [2024-12-09 23:14:30.821309] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:32:50.231 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.231 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:32:50.231 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:50.231 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.231 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.231 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.490 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:32:50.490 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:32:50.490 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:50.490 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:32:50.490 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.490 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.490 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.490 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:50.490 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:32:50.490 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.490 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.490 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.490 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:32:50.490 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.490 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:32:50.490 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.490 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.490 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:32:50.490 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:32:50.490 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:32:50.490 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:32:50.490 23:14:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:50.490 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:50.490 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:50.490 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:50.490 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:32:50.490 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.490 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.490 [2024-12-09 23:14:30.944690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:32:50.490 [2024-12-09 23:14:30.946905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:32:50.490 [2024-12-09 23:14:30.946975] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:32:50.490 [2024-12-09 23:14:30.947036] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:32:50.490 [2024-12-09 23:14:30.947056] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:50.490 [2024-12-09 23:14:30.947069] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:32:50.490 request: 00:32:50.490 { 00:32:50.490 "name": "raid_bdev1", 00:32:50.490 "raid_level": "raid1", 00:32:50.490 "base_bdevs": [ 00:32:50.490 "malloc1", 00:32:50.490 "malloc2" 00:32:50.490 ], 00:32:50.490 "superblock": false, 00:32:50.490 "method": "bdev_raid_create", 00:32:50.490 "req_id": 1 00:32:50.490 } 00:32:50.490 Got 
JSON-RPC error response 00:32:50.490 response: 00:32:50.490 { 00:32:50.490 "code": -17, 00:32:50.490 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:32:50.490 } 00:32:50.490 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:50.490 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:32:50.490 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:50.490 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:50.490 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:50.490 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:50.490 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.490 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.490 23:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:32:50.490 23:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.490 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:32:50.490 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:32:50.490 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:50.490 23:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.490 23:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.490 [2024-12-09 23:14:31.012590] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:50.490 [2024-12-09 23:14:31.012657] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:32:50.490 [2024-12-09 23:14:31.012678] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:32:50.490 [2024-12-09 23:14:31.012692] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:50.490 [2024-12-09 23:14:31.015420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:50.490 [2024-12-09 23:14:31.015565] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:50.490 [2024-12-09 23:14:31.015675] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:32:50.490 [2024-12-09 23:14:31.015750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:50.490 pt1 00:32:50.490 23:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.490 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:32:50.490 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:50.490 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:50.490 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:50.490 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:50.490 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:50.490 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:50.490 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:50.490 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:50.490 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:50.490 
23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:50.490 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:50.490 23:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.490 23:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.491 23:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.491 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:50.491 "name": "raid_bdev1", 00:32:50.491 "uuid": "4e147d0b-6a67-40a2-b6bb-13ce57e5d49f", 00:32:50.491 "strip_size_kb": 0, 00:32:50.491 "state": "configuring", 00:32:50.491 "raid_level": "raid1", 00:32:50.491 "superblock": true, 00:32:50.491 "num_base_bdevs": 2, 00:32:50.491 "num_base_bdevs_discovered": 1, 00:32:50.491 "num_base_bdevs_operational": 2, 00:32:50.491 "base_bdevs_list": [ 00:32:50.491 { 00:32:50.491 "name": "pt1", 00:32:50.491 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:50.491 "is_configured": true, 00:32:50.491 "data_offset": 2048, 00:32:50.491 "data_size": 63488 00:32:50.491 }, 00:32:50.491 { 00:32:50.491 "name": null, 00:32:50.491 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:50.491 "is_configured": false, 00:32:50.491 "data_offset": 2048, 00:32:50.491 "data_size": 63488 00:32:50.491 } 00:32:50.491 ] 00:32:50.491 }' 00:32:50.491 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:50.491 23:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.058 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:32:51.058 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:32:51.058 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:32:51.058 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:51.058 23:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.058 23:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.058 [2024-12-09 23:14:31.436035] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:51.058 [2024-12-09 23:14:31.436243] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:51.058 [2024-12-09 23:14:31.436304] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:32:51.058 [2024-12-09 23:14:31.436409] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:51.058 [2024-12-09 23:14:31.436954] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:51.058 [2024-12-09 23:14:31.437088] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:51.058 [2024-12-09 23:14:31.437261] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:32:51.058 [2024-12-09 23:14:31.437406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:51.058 [2024-12-09 23:14:31.437589] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:32:51.058 [2024-12-09 23:14:31.437701] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:32:51.058 [2024-12-09 23:14:31.438014] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:32:51.058 [2024-12-09 23:14:31.438271] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:32:51.058 [2024-12-09 23:14:31.438374] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007e80 00:32:51.058 [2024-12-09 23:14:31.438616] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:51.058 pt2 00:32:51.058 23:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.058 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:32:51.058 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:32:51.058 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:51.058 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:51.058 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:51.058 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:51.058 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:51.058 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:51.058 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:51.058 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:51.058 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:51.058 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:51.058 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:51.058 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:51.058 23:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.058 23:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
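The `verify_raid_bdev_state` calls in this log select the bdev from `bdev_raid_get_bdevs all` with jq and compare its fields against the expected values. A sketch of those comparisons, assuming jq is available; the JSON is copied from the dump printed above, trimmed to the checked fields:

```shell
# Sketch of the verify_raid_bdev_state checks against the
# bdev_raid_get_bdevs output shown in the log.
raid_bdev_info='{
  "name": "raid_bdev1",
  "uuid": "4e147d0b-6a67-40a2-b6bb-13ce57e5d49f",
  "strip_size_kb": 0,
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2
}'
expected_state=online
raid_level=raid1
strip_size=0
num_base_bdevs_operational=2
# Each field must match, or the helper fails the test case.
[ "$(echo "$raid_bdev_info" | jq -r '.state')" = "$expected_state" ]
[ "$(echo "$raid_bdev_info" | jq -r '.raid_level')" = "$raid_level" ]
[ "$(echo "$raid_bdev_info" | jq -r '.strip_size_kb')" -eq "$strip_size" ]
[ "$(echo "$raid_bdev_info" | jq -r '.num_base_bdevs_operational')" -eq "$num_base_bdevs_operational" ]
echo "raid_bdev1 state verified"
```

This is why the log alternates between a `rpc_cmd bdev_raid_get_bdevs all | jq -r 'select(...)'` pair and a printed JSON blob: the blob is the captured `raid_bdev_info` that these comparisons run against.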
00:32:51.058 23:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.059 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:51.059 "name": "raid_bdev1", 00:32:51.059 "uuid": "4e147d0b-6a67-40a2-b6bb-13ce57e5d49f", 00:32:51.059 "strip_size_kb": 0, 00:32:51.059 "state": "online", 00:32:51.059 "raid_level": "raid1", 00:32:51.059 "superblock": true, 00:32:51.059 "num_base_bdevs": 2, 00:32:51.059 "num_base_bdevs_discovered": 2, 00:32:51.059 "num_base_bdevs_operational": 2, 00:32:51.059 "base_bdevs_list": [ 00:32:51.059 { 00:32:51.059 "name": "pt1", 00:32:51.059 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:51.059 "is_configured": true, 00:32:51.059 "data_offset": 2048, 00:32:51.059 "data_size": 63488 00:32:51.059 }, 00:32:51.059 { 00:32:51.059 "name": "pt2", 00:32:51.059 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:51.059 "is_configured": true, 00:32:51.059 "data_offset": 2048, 00:32:51.059 "data_size": 63488 00:32:51.059 } 00:32:51.059 ] 00:32:51.059 }' 00:32:51.059 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:51.059 23:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.318 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:32:51.318 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:32:51.318 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:51.318 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:51.318 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:32:51.318 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:51.318 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:51.318 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:51.318 23:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.318 23:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.318 [2024-12-09 23:14:31.847711] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:51.318 23:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.318 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:51.318 "name": "raid_bdev1", 00:32:51.318 "aliases": [ 00:32:51.318 "4e147d0b-6a67-40a2-b6bb-13ce57e5d49f" 00:32:51.318 ], 00:32:51.318 "product_name": "Raid Volume", 00:32:51.318 "block_size": 512, 00:32:51.318 "num_blocks": 63488, 00:32:51.318 "uuid": "4e147d0b-6a67-40a2-b6bb-13ce57e5d49f", 00:32:51.318 "assigned_rate_limits": { 00:32:51.318 "rw_ios_per_sec": 0, 00:32:51.318 "rw_mbytes_per_sec": 0, 00:32:51.318 "r_mbytes_per_sec": 0, 00:32:51.318 "w_mbytes_per_sec": 0 00:32:51.318 }, 00:32:51.318 "claimed": false, 00:32:51.318 "zoned": false, 00:32:51.318 "supported_io_types": { 00:32:51.318 "read": true, 00:32:51.318 "write": true, 00:32:51.318 "unmap": false, 00:32:51.318 "flush": false, 00:32:51.318 "reset": true, 00:32:51.318 "nvme_admin": false, 00:32:51.318 "nvme_io": false, 00:32:51.318 "nvme_io_md": false, 00:32:51.318 "write_zeroes": true, 00:32:51.318 "zcopy": false, 00:32:51.318 "get_zone_info": false, 00:32:51.318 "zone_management": false, 00:32:51.318 "zone_append": false, 00:32:51.318 "compare": false, 00:32:51.318 "compare_and_write": false, 00:32:51.318 "abort": false, 00:32:51.319 "seek_hole": false, 00:32:51.319 "seek_data": false, 00:32:51.319 "copy": false, 00:32:51.319 "nvme_iov_md": false 00:32:51.319 }, 00:32:51.319 "memory_domains": [ 00:32:51.319 { 00:32:51.319 "dma_device_id": 
"system", 00:32:51.319 "dma_device_type": 1 00:32:51.319 }, 00:32:51.319 { 00:32:51.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:51.319 "dma_device_type": 2 00:32:51.319 }, 00:32:51.319 { 00:32:51.319 "dma_device_id": "system", 00:32:51.319 "dma_device_type": 1 00:32:51.319 }, 00:32:51.319 { 00:32:51.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:51.319 "dma_device_type": 2 00:32:51.319 } 00:32:51.319 ], 00:32:51.319 "driver_specific": { 00:32:51.319 "raid": { 00:32:51.319 "uuid": "4e147d0b-6a67-40a2-b6bb-13ce57e5d49f", 00:32:51.319 "strip_size_kb": 0, 00:32:51.319 "state": "online", 00:32:51.319 "raid_level": "raid1", 00:32:51.319 "superblock": true, 00:32:51.319 "num_base_bdevs": 2, 00:32:51.319 "num_base_bdevs_discovered": 2, 00:32:51.319 "num_base_bdevs_operational": 2, 00:32:51.319 "base_bdevs_list": [ 00:32:51.319 { 00:32:51.319 "name": "pt1", 00:32:51.319 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:51.319 "is_configured": true, 00:32:51.319 "data_offset": 2048, 00:32:51.319 "data_size": 63488 00:32:51.319 }, 00:32:51.319 { 00:32:51.319 "name": "pt2", 00:32:51.319 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:51.319 "is_configured": true, 00:32:51.319 "data_offset": 2048, 00:32:51.319 "data_size": 63488 00:32:51.319 } 00:32:51.319 ] 00:32:51.319 } 00:32:51.319 } 00:32:51.319 }' 00:32:51.319 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:51.319 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:32:51.319 pt2' 00:32:51.319 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:51.577 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:51.577 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:32:51.577 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:32:51.577 23:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.577 23:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.577 23:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:51.577 23:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.577 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:51.577 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:51.577 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:51.577 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:32:51.577 23:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.577 23:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.577 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:51.577 23:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.577 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:51.577 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:51.577 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:51.577 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:32:51.577 23:14:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.577 23:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.577 [2024-12-09 23:14:32.087424] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:51.577 23:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.577 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4e147d0b-6a67-40a2-b6bb-13ce57e5d49f '!=' 4e147d0b-6a67-40a2-b6bb-13ce57e5d49f ']' 00:32:51.577 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:32:51.577 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:51.577 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:32:51.577 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:32:51.578 23:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.578 23:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.578 [2024-12-09 23:14:32.115173] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:32:51.578 23:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.578 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:51.578 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:51.578 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:51.578 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:51.578 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:51.578 23:14:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:51.578 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:51.578 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:51.578 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:51.578 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:51.578 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:51.578 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:51.578 23:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.578 23:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.578 23:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.578 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:51.578 "name": "raid_bdev1", 00:32:51.578 "uuid": "4e147d0b-6a67-40a2-b6bb-13ce57e5d49f", 00:32:51.578 "strip_size_kb": 0, 00:32:51.578 "state": "online", 00:32:51.578 "raid_level": "raid1", 00:32:51.578 "superblock": true, 00:32:51.578 "num_base_bdevs": 2, 00:32:51.578 "num_base_bdevs_discovered": 1, 00:32:51.578 "num_base_bdevs_operational": 1, 00:32:51.578 "base_bdevs_list": [ 00:32:51.578 { 00:32:51.578 "name": null, 00:32:51.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:51.578 "is_configured": false, 00:32:51.578 "data_offset": 0, 00:32:51.578 "data_size": 63488 00:32:51.578 }, 00:32:51.578 { 00:32:51.578 "name": "pt2", 00:32:51.578 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:51.578 "is_configured": true, 00:32:51.578 "data_offset": 2048, 00:32:51.578 "data_size": 63488 00:32:51.578 } 00:32:51.578 ] 00:32:51.578 }' 
00:32:51.578 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:51.578 23:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:52.146 [2024-12-09 23:14:32.562536] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:52.146 [2024-12-09 23:14:32.562571] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:52.146 [2024-12-09 23:14:32.562657] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:52.146 [2024-12-09 23:14:32.562708] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:52.146 [2024-12-09 23:14:32.562723] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' 
']' 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:52.146 [2024-12-09 23:14:32.630449] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:52.146 [2024-12-09 23:14:32.630522] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:52.146 [2024-12-09 23:14:32.630544] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:32:52.146 [2024-12-09 23:14:32.630560] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:52.146 
[2024-12-09 23:14:32.633112] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:52.146 [2024-12-09 23:14:32.633158] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:52.146 [2024-12-09 23:14:32.633254] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:32:52.146 [2024-12-09 23:14:32.633311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:52.146 [2024-12-09 23:14:32.633428] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:32:52.146 [2024-12-09 23:14:32.633462] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:32:52.146 [2024-12-09 23:14:32.633723] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:32:52.146 [2024-12-09 23:14:32.633886] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:32:52.146 [2024-12-09 23:14:32.633897] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:32:52.146 [2024-12-09 23:14:32.634052] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:52.146 pt2 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.146 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:52.146 "name": "raid_bdev1", 00:32:52.146 "uuid": "4e147d0b-6a67-40a2-b6bb-13ce57e5d49f", 00:32:52.146 "strip_size_kb": 0, 00:32:52.146 "state": "online", 00:32:52.146 "raid_level": "raid1", 00:32:52.146 "superblock": true, 00:32:52.146 "num_base_bdevs": 2, 00:32:52.146 "num_base_bdevs_discovered": 1, 00:32:52.146 "num_base_bdevs_operational": 1, 00:32:52.146 "base_bdevs_list": [ 00:32:52.146 { 00:32:52.146 "name": null, 00:32:52.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:52.146 "is_configured": false, 00:32:52.146 "data_offset": 2048, 00:32:52.146 "data_size": 63488 00:32:52.146 }, 00:32:52.146 { 00:32:52.147 "name": "pt2", 00:32:52.147 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:52.147 "is_configured": true, 00:32:52.147 "data_offset": 2048, 00:32:52.147 "data_size": 63488 00:32:52.147 } 00:32:52.147 ] 00:32:52.147 }' 
00:32:52.147 23:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:52.147 23:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:52.714 23:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:52.714 23:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.714 23:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:52.714 [2024-12-09 23:14:33.062118] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:52.714 [2024-12-09 23:14:33.062171] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:52.714 [2024-12-09 23:14:33.062252] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:52.714 [2024-12-09 23:14:33.062307] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:52.714 [2024-12-09 23:14:33.062319] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:32:52.714 23:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.714 23:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:32:52.714 23:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:52.714 23:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.714 23:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:52.714 23:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.714 23:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:32:52.714 23:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:32:52.714 23:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:32:52.714 23:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:52.714 23:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.714 23:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:52.714 [2024-12-09 23:14:33.106064] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:52.714 [2024-12-09 23:14:33.106134] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:52.714 [2024-12-09 23:14:33.106165] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:32:52.714 [2024-12-09 23:14:33.106179] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:52.714 [2024-12-09 23:14:33.108752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:52.714 [2024-12-09 23:14:33.108787] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:52.714 [2024-12-09 23:14:33.108874] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:32:52.714 [2024-12-09 23:14:33.108920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:52.714 [2024-12-09 23:14:33.109113] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:32:52.715 [2024-12-09 23:14:33.109127] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:52.715 [2024-12-09 23:14:33.109145] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:32:52.715 [2024-12-09 23:14:33.109199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:32:52.715 [2024-12-09 23:14:33.109282] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:32:52.715 [2024-12-09 23:14:33.109293] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:32:52.715 [2024-12-09 23:14:33.109588] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:32:52.715 [2024-12-09 23:14:33.109732] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:32:52.715 [2024-12-09 23:14:33.109747] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:32:52.715 [2024-12-09 23:14:33.109908] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:52.715 pt1 00:32:52.715 23:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.715 23:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:32:52.715 23:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:52.715 23:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:52.715 23:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:52.715 23:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:52.715 23:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:52.715 23:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:52.715 23:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:52.715 23:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:52.715 23:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:32:52.715 23:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:52.715 23:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:52.715 23:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:52.715 23:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.715 23:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:52.715 23:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.715 23:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:52.715 "name": "raid_bdev1", 00:32:52.715 "uuid": "4e147d0b-6a67-40a2-b6bb-13ce57e5d49f", 00:32:52.715 "strip_size_kb": 0, 00:32:52.715 "state": "online", 00:32:52.715 "raid_level": "raid1", 00:32:52.715 "superblock": true, 00:32:52.715 "num_base_bdevs": 2, 00:32:52.715 "num_base_bdevs_discovered": 1, 00:32:52.715 "num_base_bdevs_operational": 1, 00:32:52.715 "base_bdevs_list": [ 00:32:52.715 { 00:32:52.715 "name": null, 00:32:52.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:52.715 "is_configured": false, 00:32:52.715 "data_offset": 2048, 00:32:52.715 "data_size": 63488 00:32:52.715 }, 00:32:52.715 { 00:32:52.715 "name": "pt2", 00:32:52.715 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:52.715 "is_configured": true, 00:32:52.715 "data_offset": 2048, 00:32:52.715 "data_size": 63488 00:32:52.715 } 00:32:52.715 ] 00:32:52.715 }' 00:32:52.715 23:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:52.715 23:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:52.974 23:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:32:52.974 23:14:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:32:52.974 23:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.974 23:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:52.974 23:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.974 23:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:32:52.974 23:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:52.974 23:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:32:52.974 23:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.974 23:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:52.974 [2024-12-09 23:14:33.569661] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:52.974 23:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.233 23:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 4e147d0b-6a67-40a2-b6bb-13ce57e5d49f '!=' 4e147d0b-6a67-40a2-b6bb-13ce57e5d49f ']' 00:32:53.233 23:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63097 00:32:53.233 23:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63097 ']' 00:32:53.233 23:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63097 00:32:53.233 23:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:32:53.233 23:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:53.233 23:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63097 00:32:53.233 
23:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:53.233 killing process with pid 63097 00:32:53.233 23:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:53.233 23:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63097' 00:32:53.233 23:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63097 00:32:53.233 [2024-12-09 23:14:33.644004] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:53.233 [2024-12-09 23:14:33.644105] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:53.233 23:14:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63097 00:32:53.233 [2024-12-09 23:14:33.644155] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:53.233 [2024-12-09 23:14:33.644173] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:32:53.233 [2024-12-09 23:14:33.857500] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:54.612 23:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:32:54.612 00:32:54.612 real 0m5.964s 00:32:54.612 user 0m9.000s 00:32:54.612 sys 0m1.124s 00:32:54.612 23:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:54.612 23:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:54.612 ************************************ 00:32:54.612 END TEST raid_superblock_test 00:32:54.612 ************************************ 00:32:54.612 23:14:35 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:32:54.612 23:14:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:32:54.612 23:14:35 bdev_raid 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:54.612 23:14:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:54.612 ************************************ 00:32:54.612 START TEST raid_read_error_test 00:32:54.612 ************************************ 00:32:54.612 23:14:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:32:54.612 23:14:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:32:54.612 23:14:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:32:54.612 23:14:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:32:54.612 23:14:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:32:54.612 23:14:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:54.612 23:14:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:32:54.612 23:14:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:54.612 23:14:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:54.612 23:14:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:32:54.612 23:14:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:54.612 23:14:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:54.612 23:14:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:54.612 23:14:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:32:54.613 23:14:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:32:54.613 23:14:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:32:54.613 23:14:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:32:54.613 23:14:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:32:54.613 23:14:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:32:54.613 23:14:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:32:54.613 23:14:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:32:54.613 23:14:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:32:54.613 23:14:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.vQSYj2jTa5 00:32:54.613 23:14:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63422 00:32:54.613 23:14:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63422 00:32:54.613 23:14:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:32:54.613 23:14:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63422 ']' 00:32:54.613 23:14:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:54.613 23:14:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:54.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:54.613 23:14:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:54.613 23:14:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:54.613 23:14:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:54.613 [2024-12-09 23:14:35.184167] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:32:54.613 [2024-12-09 23:14:35.184291] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63422 ] 00:32:54.871 [2024-12-09 23:14:35.364760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:54.871 [2024-12-09 23:14:35.477345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:55.130 [2024-12-09 23:14:35.687818] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:55.130 [2024-12-09 23:14:35.687856] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:55.389 23:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:55.389 23:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:32:55.389 23:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:55.389 23:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:32:55.389 23:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.648 23:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.648 BaseBdev1_malloc 00:32:55.648 23:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.648 23:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:32:55.648 23:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.648 23:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.648 true 00:32:55.648 23:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.648 23:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:32:55.648 23:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.648 23:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.648 [2024-12-09 23:14:36.090512] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:32:55.648 [2024-12-09 23:14:36.090569] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:55.648 [2024-12-09 23:14:36.090592] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:32:55.648 [2024-12-09 23:14:36.090606] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:55.648 [2024-12-09 23:14:36.092933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:55.648 [2024-12-09 23:14:36.092975] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:55.648 BaseBdev1 00:32:55.648 23:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.648 23:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:55.648 23:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:32:55.648 23:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.648 23:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:32:55.648 BaseBdev2_malloc 00:32:55.648 23:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.648 23:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:32:55.648 23:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.648 23:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.648 true 00:32:55.648 23:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.648 23:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:32:55.648 23:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.648 23:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.648 [2024-12-09 23:14:36.159646] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:32:55.648 [2024-12-09 23:14:36.159700] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:55.648 [2024-12-09 23:14:36.159719] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:32:55.648 [2024-12-09 23:14:36.159733] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:55.648 [2024-12-09 23:14:36.162074] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:55.648 [2024-12-09 23:14:36.162125] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:55.648 BaseBdev2 00:32:55.649 23:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.649 23:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:32:55.649 23:14:36 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.649 23:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.649 [2024-12-09 23:14:36.171685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:55.649 [2024-12-09 23:14:36.173771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:55.649 [2024-12-09 23:14:36.173978] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:32:55.649 [2024-12-09 23:14:36.173995] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:32:55.649 [2024-12-09 23:14:36.174251] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:32:55.649 [2024-12-09 23:14:36.174433] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:32:55.649 [2024-12-09 23:14:36.174445] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:32:55.649 [2024-12-09 23:14:36.174595] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:55.649 23:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.649 23:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:55.649 23:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:55.649 23:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:55.649 23:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:55.649 23:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:55.649 23:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:32:55.649 23:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:55.649 23:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:55.649 23:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:55.649 23:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:55.649 23:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:55.649 23:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.649 23:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:55.649 23:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.649 23:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.649 23:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:55.649 "name": "raid_bdev1", 00:32:55.649 "uuid": "dec587a1-39ea-4790-8c78-6c397a66dbd7", 00:32:55.649 "strip_size_kb": 0, 00:32:55.649 "state": "online", 00:32:55.649 "raid_level": "raid1", 00:32:55.649 "superblock": true, 00:32:55.649 "num_base_bdevs": 2, 00:32:55.649 "num_base_bdevs_discovered": 2, 00:32:55.649 "num_base_bdevs_operational": 2, 00:32:55.649 "base_bdevs_list": [ 00:32:55.649 { 00:32:55.649 "name": "BaseBdev1", 00:32:55.649 "uuid": "689ed52b-d6b9-5443-a2ca-40d8277390c1", 00:32:55.649 "is_configured": true, 00:32:55.649 "data_offset": 2048, 00:32:55.649 "data_size": 63488 00:32:55.649 }, 00:32:55.649 { 00:32:55.649 "name": "BaseBdev2", 00:32:55.649 "uuid": "7bcd58e6-d5be-58a2-a004-b39dc00b5d70", 00:32:55.649 "is_configured": true, 00:32:55.649 "data_offset": 2048, 00:32:55.649 "data_size": 63488 00:32:55.649 } 00:32:55.649 ] 00:32:55.649 }' 00:32:55.649 23:14:36 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:55.649 23:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:56.219 23:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:32:56.219 23:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:32:56.219 [2024-12-09 23:14:36.676448] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:32:57.209 23:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:32:57.209 23:14:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.209 23:14:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:57.209 23:14:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.209 23:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:32:57.209 23:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:32:57.209 23:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:32:57.209 23:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:32:57.209 23:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:57.209 23:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:57.209 23:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:57.209 23:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:57.209 23:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:57.209 23:14:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:57.209 23:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:57.209 23:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:57.209 23:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:57.209 23:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:57.209 23:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:57.209 23:14:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.209 23:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:57.209 23:14:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:57.209 23:14:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.209 23:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:57.209 "name": "raid_bdev1", 00:32:57.209 "uuid": "dec587a1-39ea-4790-8c78-6c397a66dbd7", 00:32:57.209 "strip_size_kb": 0, 00:32:57.209 "state": "online", 00:32:57.209 "raid_level": "raid1", 00:32:57.209 "superblock": true, 00:32:57.209 "num_base_bdevs": 2, 00:32:57.209 "num_base_bdevs_discovered": 2, 00:32:57.209 "num_base_bdevs_operational": 2, 00:32:57.209 "base_bdevs_list": [ 00:32:57.209 { 00:32:57.209 "name": "BaseBdev1", 00:32:57.209 "uuid": "689ed52b-d6b9-5443-a2ca-40d8277390c1", 00:32:57.209 "is_configured": true, 00:32:57.209 "data_offset": 2048, 00:32:57.209 "data_size": 63488 00:32:57.209 }, 00:32:57.209 { 00:32:57.209 "name": "BaseBdev2", 00:32:57.209 "uuid": "7bcd58e6-d5be-58a2-a004-b39dc00b5d70", 00:32:57.209 "is_configured": true, 00:32:57.209 "data_offset": 2048, 00:32:57.209 "data_size": 63488 
00:32:57.209 } 00:32:57.209 ] 00:32:57.209 }' 00:32:57.209 23:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:57.209 23:14:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:57.467 23:14:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:57.467 23:14:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.467 23:14:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:57.467 [2024-12-09 23:14:38.014781] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:57.467 [2024-12-09 23:14:38.014826] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:57.467 [2024-12-09 23:14:38.017471] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:57.467 [2024-12-09 23:14:38.017525] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:57.467 [2024-12-09 23:14:38.017608] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:57.467 [2024-12-09 23:14:38.017622] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:32:57.467 { 00:32:57.467 "results": [ 00:32:57.467 { 00:32:57.467 "job": "raid_bdev1", 00:32:57.467 "core_mask": "0x1", 00:32:57.467 "workload": "randrw", 00:32:57.467 "percentage": 50, 00:32:57.467 "status": "finished", 00:32:57.467 "queue_depth": 1, 00:32:57.467 "io_size": 131072, 00:32:57.467 "runtime": 1.338467, 00:32:57.467 "iops": 17616.42236977079, 00:32:57.467 "mibps": 2202.0527962213487, 00:32:57.467 "io_failed": 0, 00:32:57.467 "io_timeout": 0, 00:32:57.467 "avg_latency_us": 54.001415663076415, 00:32:57.467 "min_latency_us": 24.983132530120482, 00:32:57.468 "max_latency_us": 1500.2216867469879 00:32:57.468 } 00:32:57.468 ], 
00:32:57.468 "core_count": 1 00:32:57.468 } 00:32:57.468 23:14:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.468 23:14:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63422 00:32:57.468 23:14:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63422 ']' 00:32:57.468 23:14:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63422 00:32:57.468 23:14:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:32:57.468 23:14:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:57.468 23:14:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63422 00:32:57.468 23:14:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:57.468 killing process with pid 63422 00:32:57.468 23:14:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:57.468 23:14:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63422' 00:32:57.468 23:14:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63422 00:32:57.468 [2024-12-09 23:14:38.065963] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:57.468 23:14:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63422 00:32:57.732 [2024-12-09 23:14:38.202087] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:59.110 23:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.vQSYj2jTa5 00:32:59.110 23:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:32:59.110 23:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:32:59.110 23:14:39 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:32:59.110 23:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:32:59.110 23:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:59.110 23:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:32:59.110 23:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:32:59.110 00:32:59.110 real 0m4.347s 00:32:59.110 user 0m5.132s 00:32:59.110 sys 0m0.578s 00:32:59.110 23:14:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:59.110 23:14:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:59.110 ************************************ 00:32:59.110 END TEST raid_read_error_test 00:32:59.110 ************************************ 00:32:59.110 23:14:39 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:32:59.110 23:14:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:32:59.110 23:14:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:59.110 23:14:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:59.110 ************************************ 00:32:59.110 START TEST raid_write_error_test 00:32:59.110 ************************************ 00:32:59.110 23:14:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:32:59.110 23:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:32:59.110 23:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:32:59.110 23:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:32:59.110 23:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:32:59.110 23:14:39 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:59.110 23:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:32:59.110 23:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:59.110 23:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:59.110 23:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:32:59.110 23:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:59.110 23:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:59.110 23:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:59.110 23:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:32:59.110 23:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:32:59.110 23:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:32:59.110 23:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:32:59.110 23:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:32:59.110 23:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:32:59.110 23:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:32:59.110 23:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:32:59.110 23:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:32:59.110 23:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.zXesv8c4zq 00:32:59.110 23:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63562 00:32:59.110 23:14:39 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@811 -- # waitforlisten 63562 00:32:59.110 23:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:32:59.110 23:14:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63562 ']' 00:32:59.110 23:14:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:59.110 23:14:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:59.110 23:14:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:59.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:59.110 23:14:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:59.110 23:14:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:59.110 [2024-12-09 23:14:39.600622] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:32:59.110 [2024-12-09 23:14:39.600750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63562 ] 00:32:59.369 [2024-12-09 23:14:39.775300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:59.369 [2024-12-09 23:14:39.894762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:59.628 [2024-12-09 23:14:40.102084] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:59.628 [2024-12-09 23:14:40.102159] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:59.887 23:14:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:59.887 23:14:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:32:59.887 23:14:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:59.887 23:14:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:32:59.887 23:14:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.887 23:14:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:59.887 BaseBdev1_malloc 00:32:59.887 23:14:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.887 23:14:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:32:59.887 23:14:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.887 23:14:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:59.887 true 00:32:59.887 23:14:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:32:59.887 23:14:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:32:59.887 23:14:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.887 23:14:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:59.887 [2024-12-09 23:14:40.487589] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:32:59.887 [2024-12-09 23:14:40.487652] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:59.887 [2024-12-09 23:14:40.487677] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:32:59.887 [2024-12-09 23:14:40.487692] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:59.887 [2024-12-09 23:14:40.490209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:59.887 [2024-12-09 23:14:40.490253] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:59.887 BaseBdev1 00:32:59.887 23:14:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.887 23:14:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:59.887 23:14:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:32:59.887 23:14:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.887 23:14:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:00.146 BaseBdev2_malloc 00:33:00.146 23:14:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.146 23:14:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:33:00.146 23:14:40 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.146 23:14:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:00.146 true 00:33:00.146 23:14:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.146 23:14:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:33:00.146 23:14:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.146 23:14:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:00.146 [2024-12-09 23:14:40.551781] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:33:00.146 [2024-12-09 23:14:40.551839] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:00.146 [2024-12-09 23:14:40.551860] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:33:00.146 [2024-12-09 23:14:40.551874] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:00.146 [2024-12-09 23:14:40.554275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:00.146 [2024-12-09 23:14:40.554315] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:33:00.146 BaseBdev2 00:33:00.146 23:14:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.146 23:14:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:33:00.146 23:14:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.146 23:14:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:00.146 [2024-12-09 23:14:40.563832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:33:00.146 [2024-12-09 23:14:40.565923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:00.146 [2024-12-09 23:14:40.566141] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:33:00.146 [2024-12-09 23:14:40.566159] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:00.146 [2024-12-09 23:14:40.566432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:33:00.146 [2024-12-09 23:14:40.566621] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:33:00.146 [2024-12-09 23:14:40.566632] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:33:00.146 [2024-12-09 23:14:40.566790] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:00.146 23:14:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.146 23:14:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:00.146 23:14:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:00.146 23:14:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:00.146 23:14:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:00.146 23:14:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:00.146 23:14:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:00.146 23:14:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:00.146 23:14:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:00.146 23:14:40 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:00.146 23:14:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:00.146 23:14:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:00.146 23:14:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:00.146 23:14:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.146 23:14:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:00.146 23:14:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.146 23:14:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:00.146 "name": "raid_bdev1", 00:33:00.146 "uuid": "52edeaa1-9432-4bef-918b-3054bd7021b0", 00:33:00.146 "strip_size_kb": 0, 00:33:00.146 "state": "online", 00:33:00.146 "raid_level": "raid1", 00:33:00.146 "superblock": true, 00:33:00.146 "num_base_bdevs": 2, 00:33:00.146 "num_base_bdevs_discovered": 2, 00:33:00.146 "num_base_bdevs_operational": 2, 00:33:00.147 "base_bdevs_list": [ 00:33:00.147 { 00:33:00.147 "name": "BaseBdev1", 00:33:00.147 "uuid": "f60bb447-88b4-51b0-bd0d-5bab241d101f", 00:33:00.147 "is_configured": true, 00:33:00.147 "data_offset": 2048, 00:33:00.147 "data_size": 63488 00:33:00.147 }, 00:33:00.147 { 00:33:00.147 "name": "BaseBdev2", 00:33:00.147 "uuid": "a08ae5d7-02c5-5f41-a024-cb97601ddf91", 00:33:00.147 "is_configured": true, 00:33:00.147 "data_offset": 2048, 00:33:00.147 "data_size": 63488 00:33:00.147 } 00:33:00.147 ] 00:33:00.147 }' 00:33:00.147 23:14:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:00.147 23:14:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:00.406 23:14:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:33:00.406 23:14:40 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:33:00.663 [2024-12-09 23:14:41.076444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:33:01.598 23:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:33:01.598 23:14:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.598 23:14:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.598 [2024-12-09 23:14:41.985398] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:33:01.598 [2024-12-09 23:14:41.985493] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:01.598 [2024-12-09 23:14:41.985693] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:33:01.598 23:14:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.598 23:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:33:01.598 23:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:33:01.598 23:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:33:01.598 23:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:33:01.598 23:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:01.598 23:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:01.598 23:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:01.598 23:14:41 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:01.598 23:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:01.598 23:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:01.598 23:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:01.598 23:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:01.598 23:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:01.598 23:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:01.598 23:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:01.598 23:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:01.598 23:14:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.599 23:14:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.599 23:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.599 23:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:01.599 "name": "raid_bdev1", 00:33:01.599 "uuid": "52edeaa1-9432-4bef-918b-3054bd7021b0", 00:33:01.599 "strip_size_kb": 0, 00:33:01.599 "state": "online", 00:33:01.599 "raid_level": "raid1", 00:33:01.599 "superblock": true, 00:33:01.599 "num_base_bdevs": 2, 00:33:01.599 "num_base_bdevs_discovered": 1, 00:33:01.599 "num_base_bdevs_operational": 1, 00:33:01.599 "base_bdevs_list": [ 00:33:01.599 { 00:33:01.599 "name": null, 00:33:01.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:01.599 "is_configured": false, 00:33:01.599 "data_offset": 0, 00:33:01.599 "data_size": 63488 00:33:01.599 }, 00:33:01.599 { 00:33:01.599 "name": 
"BaseBdev2", 00:33:01.599 "uuid": "a08ae5d7-02c5-5f41-a024-cb97601ddf91", 00:33:01.599 "is_configured": true, 00:33:01.599 "data_offset": 2048, 00:33:01.599 "data_size": 63488 00:33:01.599 } 00:33:01.599 ] 00:33:01.599 }' 00:33:01.599 23:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:01.599 23:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.858 23:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:01.858 23:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.858 23:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.858 [2024-12-09 23:14:42.391785] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:01.858 [2024-12-09 23:14:42.391825] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:01.858 [2024-12-09 23:14:42.394644] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:01.858 [2024-12-09 23:14:42.394697] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:01.858 [2024-12-09 23:14:42.394762] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:01.858 [2024-12-09 23:14:42.394778] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:33:01.858 { 00:33:01.858 "results": [ 00:33:01.858 { 00:33:01.858 "job": "raid_bdev1", 00:33:01.858 "core_mask": "0x1", 00:33:01.858 "workload": "randrw", 00:33:01.858 "percentage": 50, 00:33:01.858 "status": "finished", 00:33:01.858 "queue_depth": 1, 00:33:01.858 "io_size": 131072, 00:33:01.858 "runtime": 1.315414, 00:33:01.858 "iops": 19801.370519091328, 00:33:01.858 "mibps": 2475.171314886416, 00:33:01.858 "io_failed": 0, 00:33:01.858 "io_timeout": 0, 
00:33:01.858 "avg_latency_us": 47.67499171639528, 00:33:01.858 "min_latency_us": 23.955020080321287, 00:33:01.858 "max_latency_us": 1460.7421686746989 00:33:01.858 } 00:33:01.858 ], 00:33:01.858 "core_count": 1 00:33:01.858 } 00:33:01.858 23:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.858 23:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63562 00:33:01.858 23:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63562 ']' 00:33:01.858 23:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63562 00:33:01.858 23:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:33:01.858 23:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:01.858 23:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63562 00:33:01.858 23:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:01.858 killing process with pid 63562 00:33:01.858 23:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:01.858 23:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63562' 00:33:01.858 23:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63562 00:33:01.858 [2024-12-09 23:14:42.445537] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:01.858 23:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63562 00:33:02.116 [2024-12-09 23:14:42.591535] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:03.496 23:14:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.zXesv8c4zq 00:33:03.496 23:14:43 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:33:03.496 23:14:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:33:03.496 23:14:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:33:03.496 23:14:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:33:03.496 23:14:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:33:03.496 23:14:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:33:03.496 23:14:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:33:03.496 00:33:03.496 real 0m4.378s 00:33:03.496 user 0m5.147s 00:33:03.496 sys 0m0.600s 00:33:03.496 23:14:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:03.496 23:14:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:03.496 ************************************ 00:33:03.496 END TEST raid_write_error_test 00:33:03.496 ************************************ 00:33:03.496 23:14:43 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:33:03.496 23:14:43 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:33:03.496 23:14:43 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:33:03.496 23:14:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:33:03.496 23:14:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:03.496 23:14:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:03.496 ************************************ 00:33:03.496 START TEST raid_state_function_test 00:33:03.496 ************************************ 00:33:03.496 23:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:33:03.496 23:14:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:33:03.496 23:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:33:03.496 23:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:33:03.496 23:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:33:03.496 23:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:33:03.496 23:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:03.496 23:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:33:03.496 23:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:03.496 23:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:03.496 23:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:33:03.496 23:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:03.496 23:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:03.496 23:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:33:03.496 23:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:03.496 23:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:03.496 23:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:33:03.496 23:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:33:03.496 23:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:33:03.496 23:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:33:03.496 
23:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:33:03.496 23:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:33:03.496 23:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:33:03.496 23:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:33:03.496 23:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:33:03.496 23:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:33:03.496 23:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:33:03.496 23:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63706 00:33:03.496 Process raid pid: 63706 00:33:03.496 23:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63706' 00:33:03.496 23:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63706 00:33:03.496 23:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63706 ']' 00:33:03.496 23:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:33:03.496 23:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:03.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:03.496 23:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:03.496 23:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:03.496 23:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:03.496 23:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:03.496 [2024-12-09 23:14:44.050434] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:33:03.496 [2024-12-09 23:14:44.050578] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:03.788 [2024-12-09 23:14:44.228110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:03.788 [2024-12-09 23:14:44.355122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:04.044 [2024-12-09 23:14:44.582197] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:04.045 [2024-12-09 23:14:44.582253] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:04.302 23:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:04.302 23:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:33:04.302 23:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:33:04.302 23:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.302 23:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:04.302 [2024-12-09 23:14:44.907667] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:04.302 [2024-12-09 23:14:44.907735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:04.302 [2024-12-09 23:14:44.907747] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:04.302 [2024-12-09 23:14:44.907760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:04.302 [2024-12-09 23:14:44.907768] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:04.302 [2024-12-09 23:14:44.907780] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:04.302 23:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.302 23:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:04.302 23:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:04.302 23:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:04.302 23:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:04.302 23:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:04.302 23:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:04.302 23:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:04.302 23:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:04.302 23:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:04.302 23:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:04.302 23:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:04.302 23:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.302 23:14:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:04.302 23:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:04.560 23:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.560 23:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:04.560 "name": "Existed_Raid", 00:33:04.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:04.560 "strip_size_kb": 64, 00:33:04.560 "state": "configuring", 00:33:04.560 "raid_level": "raid0", 00:33:04.560 "superblock": false, 00:33:04.560 "num_base_bdevs": 3, 00:33:04.560 "num_base_bdevs_discovered": 0, 00:33:04.560 "num_base_bdevs_operational": 3, 00:33:04.560 "base_bdevs_list": [ 00:33:04.560 { 00:33:04.560 "name": "BaseBdev1", 00:33:04.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:04.560 "is_configured": false, 00:33:04.560 "data_offset": 0, 00:33:04.560 "data_size": 0 00:33:04.560 }, 00:33:04.560 { 00:33:04.560 "name": "BaseBdev2", 00:33:04.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:04.560 "is_configured": false, 00:33:04.560 "data_offset": 0, 00:33:04.560 "data_size": 0 00:33:04.560 }, 00:33:04.560 { 00:33:04.560 "name": "BaseBdev3", 00:33:04.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:04.560 "is_configured": false, 00:33:04.560 "data_offset": 0, 00:33:04.560 "data_size": 0 00:33:04.560 } 00:33:04.560 ] 00:33:04.560 }' 00:33:04.560 23:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:04.560 23:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:04.818 23:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:04.818 23:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.818 23:14:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:04.818 [2024-12-09 23:14:45.375031] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:04.818 [2024-12-09 23:14:45.375080] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:33:04.818 23:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.818 23:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:33:04.818 23:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.818 23:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:04.818 [2024-12-09 23:14:45.387010] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:04.818 [2024-12-09 23:14:45.387067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:04.818 [2024-12-09 23:14:45.387079] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:04.818 [2024-12-09 23:14:45.387093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:04.818 [2024-12-09 23:14:45.387102] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:04.818 [2024-12-09 23:14:45.387115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:04.818 23:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.818 23:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:33:04.818 23:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:33:04.818 23:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:04.818 [2024-12-09 23:14:45.437705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:04.818 BaseBdev1 00:33:04.818 23:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.818 23:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:33:04.818 23:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:33:04.818 23:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:04.818 23:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:33:04.818 23:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:04.818 23:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:04.818 23:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:04.818 23:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.818 23:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:04.818 23:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.818 23:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:04.818 23:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.818 23:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:05.076 [ 00:33:05.076 { 00:33:05.076 "name": "BaseBdev1", 00:33:05.076 "aliases": [ 00:33:05.076 "4199435b-ae9c-4360-a613-a3fa92fbdd71" 00:33:05.076 ], 00:33:05.077 
"product_name": "Malloc disk", 00:33:05.077 "block_size": 512, 00:33:05.077 "num_blocks": 65536, 00:33:05.077 "uuid": "4199435b-ae9c-4360-a613-a3fa92fbdd71", 00:33:05.077 "assigned_rate_limits": { 00:33:05.077 "rw_ios_per_sec": 0, 00:33:05.077 "rw_mbytes_per_sec": 0, 00:33:05.077 "r_mbytes_per_sec": 0, 00:33:05.077 "w_mbytes_per_sec": 0 00:33:05.077 }, 00:33:05.077 "claimed": true, 00:33:05.077 "claim_type": "exclusive_write", 00:33:05.077 "zoned": false, 00:33:05.077 "supported_io_types": { 00:33:05.077 "read": true, 00:33:05.077 "write": true, 00:33:05.077 "unmap": true, 00:33:05.077 "flush": true, 00:33:05.077 "reset": true, 00:33:05.077 "nvme_admin": false, 00:33:05.077 "nvme_io": false, 00:33:05.077 "nvme_io_md": false, 00:33:05.077 "write_zeroes": true, 00:33:05.077 "zcopy": true, 00:33:05.077 "get_zone_info": false, 00:33:05.077 "zone_management": false, 00:33:05.077 "zone_append": false, 00:33:05.077 "compare": false, 00:33:05.077 "compare_and_write": false, 00:33:05.077 "abort": true, 00:33:05.077 "seek_hole": false, 00:33:05.077 "seek_data": false, 00:33:05.077 "copy": true, 00:33:05.077 "nvme_iov_md": false 00:33:05.077 }, 00:33:05.077 "memory_domains": [ 00:33:05.077 { 00:33:05.077 "dma_device_id": "system", 00:33:05.077 "dma_device_type": 1 00:33:05.077 }, 00:33:05.077 { 00:33:05.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:05.077 "dma_device_type": 2 00:33:05.077 } 00:33:05.077 ], 00:33:05.077 "driver_specific": {} 00:33:05.077 } 00:33:05.077 ] 00:33:05.077 23:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.077 23:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:33:05.077 23:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:05.077 23:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:05.077 23:14:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:05.077 23:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:05.077 23:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:05.077 23:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:05.077 23:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:05.077 23:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:05.077 23:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:05.077 23:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:05.077 23:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:05.077 23:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:05.077 23:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.077 23:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:05.077 23:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.077 23:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:05.077 "name": "Existed_Raid", 00:33:05.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:05.077 "strip_size_kb": 64, 00:33:05.077 "state": "configuring", 00:33:05.077 "raid_level": "raid0", 00:33:05.077 "superblock": false, 00:33:05.077 "num_base_bdevs": 3, 00:33:05.077 "num_base_bdevs_discovered": 1, 00:33:05.077 "num_base_bdevs_operational": 3, 00:33:05.077 "base_bdevs_list": [ 00:33:05.077 { 00:33:05.077 "name": "BaseBdev1", 
00:33:05.077 "uuid": "4199435b-ae9c-4360-a613-a3fa92fbdd71", 00:33:05.077 "is_configured": true, 00:33:05.077 "data_offset": 0, 00:33:05.077 "data_size": 65536 00:33:05.077 }, 00:33:05.077 { 00:33:05.077 "name": "BaseBdev2", 00:33:05.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:05.077 "is_configured": false, 00:33:05.077 "data_offset": 0, 00:33:05.077 "data_size": 0 00:33:05.077 }, 00:33:05.077 { 00:33:05.077 "name": "BaseBdev3", 00:33:05.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:05.077 "is_configured": false, 00:33:05.077 "data_offset": 0, 00:33:05.077 "data_size": 0 00:33:05.077 } 00:33:05.077 ] 00:33:05.077 }' 00:33:05.077 23:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:05.077 23:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:05.335 23:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:05.335 23:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.335 23:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:05.335 [2024-12-09 23:14:45.901125] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:05.335 [2024-12-09 23:14:45.901325] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:33:05.335 23:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.335 23:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:33:05.335 23:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.335 23:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:05.335 [2024-12-09 
23:14:45.909171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:05.335 [2024-12-09 23:14:45.911551] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:05.335 [2024-12-09 23:14:45.911705] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:05.335 [2024-12-09 23:14:45.911799] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:05.336 [2024-12-09 23:14:45.911846] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:05.336 23:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.336 23:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:33:05.336 23:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:05.336 23:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:05.336 23:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:05.336 23:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:05.336 23:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:05.336 23:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:05.336 23:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:05.336 23:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:05.336 23:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:05.336 23:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:33:05.336 23:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:05.336 23:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:05.336 23:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.336 23:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:05.336 23:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:05.336 23:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.336 23:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:05.336 "name": "Existed_Raid", 00:33:05.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:05.336 "strip_size_kb": 64, 00:33:05.336 "state": "configuring", 00:33:05.336 "raid_level": "raid0", 00:33:05.336 "superblock": false, 00:33:05.336 "num_base_bdevs": 3, 00:33:05.336 "num_base_bdevs_discovered": 1, 00:33:05.336 "num_base_bdevs_operational": 3, 00:33:05.336 "base_bdevs_list": [ 00:33:05.336 { 00:33:05.336 "name": "BaseBdev1", 00:33:05.336 "uuid": "4199435b-ae9c-4360-a613-a3fa92fbdd71", 00:33:05.336 "is_configured": true, 00:33:05.336 "data_offset": 0, 00:33:05.336 "data_size": 65536 00:33:05.336 }, 00:33:05.336 { 00:33:05.336 "name": "BaseBdev2", 00:33:05.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:05.336 "is_configured": false, 00:33:05.336 "data_offset": 0, 00:33:05.336 "data_size": 0 00:33:05.336 }, 00:33:05.336 { 00:33:05.336 "name": "BaseBdev3", 00:33:05.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:05.336 "is_configured": false, 00:33:05.336 "data_offset": 0, 00:33:05.336 "data_size": 0 00:33:05.336 } 00:33:05.336 ] 00:33:05.336 }' 00:33:05.336 23:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:33:05.336 23:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:05.904 23:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:33:05.904 23:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.904 23:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:05.904 [2024-12-09 23:14:46.338434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:05.904 BaseBdev2 00:33:05.904 23:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.904 23:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:33:05.904 23:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:33:05.904 23:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:05.904 23:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:33:05.904 23:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:05.904 23:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:05.904 23:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:05.904 23:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.904 23:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:05.904 23:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.904 23:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:05.904 23:14:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.904 23:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:05.904 [ 00:33:05.904 { 00:33:05.904 "name": "BaseBdev2", 00:33:05.904 "aliases": [ 00:33:05.904 "b7428524-3d21-48f6-a8b1-1feed7516c28" 00:33:05.904 ], 00:33:05.904 "product_name": "Malloc disk", 00:33:05.904 "block_size": 512, 00:33:05.904 "num_blocks": 65536, 00:33:05.904 "uuid": "b7428524-3d21-48f6-a8b1-1feed7516c28", 00:33:05.904 "assigned_rate_limits": { 00:33:05.904 "rw_ios_per_sec": 0, 00:33:05.904 "rw_mbytes_per_sec": 0, 00:33:05.904 "r_mbytes_per_sec": 0, 00:33:05.904 "w_mbytes_per_sec": 0 00:33:05.904 }, 00:33:05.904 "claimed": true, 00:33:05.904 "claim_type": "exclusive_write", 00:33:05.904 "zoned": false, 00:33:05.904 "supported_io_types": { 00:33:05.904 "read": true, 00:33:05.904 "write": true, 00:33:05.904 "unmap": true, 00:33:05.904 "flush": true, 00:33:05.904 "reset": true, 00:33:05.904 "nvme_admin": false, 00:33:05.904 "nvme_io": false, 00:33:05.904 "nvme_io_md": false, 00:33:05.904 "write_zeroes": true, 00:33:05.904 "zcopy": true, 00:33:05.904 "get_zone_info": false, 00:33:05.904 "zone_management": false, 00:33:05.904 "zone_append": false, 00:33:05.904 "compare": false, 00:33:05.904 "compare_and_write": false, 00:33:05.904 "abort": true, 00:33:05.904 "seek_hole": false, 00:33:05.904 "seek_data": false, 00:33:05.904 "copy": true, 00:33:05.904 "nvme_iov_md": false 00:33:05.904 }, 00:33:05.904 "memory_domains": [ 00:33:05.904 { 00:33:05.904 "dma_device_id": "system", 00:33:05.904 "dma_device_type": 1 00:33:05.904 }, 00:33:05.904 { 00:33:05.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:05.904 "dma_device_type": 2 00:33:05.904 } 00:33:05.904 ], 00:33:05.904 "driver_specific": {} 00:33:05.904 } 00:33:05.904 ] 00:33:05.904 23:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.904 23:14:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:33:05.904 23:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:33:05.904 23:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:05.904 23:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:05.904 23:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:05.904 23:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:05.904 23:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:05.904 23:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:05.904 23:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:05.904 23:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:05.904 23:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:05.904 23:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:05.904 23:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:05.904 23:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:05.904 23:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.904 23:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:05.904 23:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:05.904 23:14:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.904 23:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:05.904 "name": "Existed_Raid", 00:33:05.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:05.904 "strip_size_kb": 64, 00:33:05.904 "state": "configuring", 00:33:05.904 "raid_level": "raid0", 00:33:05.904 "superblock": false, 00:33:05.904 "num_base_bdevs": 3, 00:33:05.904 "num_base_bdevs_discovered": 2, 00:33:05.904 "num_base_bdevs_operational": 3, 00:33:05.904 "base_bdevs_list": [ 00:33:05.904 { 00:33:05.904 "name": "BaseBdev1", 00:33:05.904 "uuid": "4199435b-ae9c-4360-a613-a3fa92fbdd71", 00:33:05.904 "is_configured": true, 00:33:05.904 "data_offset": 0, 00:33:05.904 "data_size": 65536 00:33:05.904 }, 00:33:05.904 { 00:33:05.904 "name": "BaseBdev2", 00:33:05.904 "uuid": "b7428524-3d21-48f6-a8b1-1feed7516c28", 00:33:05.904 "is_configured": true, 00:33:05.904 "data_offset": 0, 00:33:05.904 "data_size": 65536 00:33:05.904 }, 00:33:05.904 { 00:33:05.904 "name": "BaseBdev3", 00:33:05.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:05.904 "is_configured": false, 00:33:05.904 "data_offset": 0, 00:33:05.904 "data_size": 0 00:33:05.904 } 00:33:05.904 ] 00:33:05.904 }' 00:33:05.904 23:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:05.904 23:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:06.163 23:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:33:06.163 23:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.163 23:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:06.422 [2024-12-09 23:14:46.812720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:06.422 [2024-12-09 23:14:46.812772] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:33:06.422 [2024-12-09 23:14:46.812788] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:33:06.422 [2024-12-09 23:14:46.813191] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:33:06.422 [2024-12-09 23:14:46.813367] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:33:06.422 [2024-12-09 23:14:46.813378] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:33:06.422 [2024-12-09 23:14:46.813681] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:06.422 BaseBdev3 00:33:06.422 23:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.422 23:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:33:06.422 23:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:33:06.422 23:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:06.422 23:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:33:06.422 23:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:06.422 23:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:06.422 23:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:06.422 23:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.422 23:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:06.422 23:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.422 
23:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:33:06.422 23:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.422 23:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:06.422 [ 00:33:06.422 { 00:33:06.422 "name": "BaseBdev3", 00:33:06.422 "aliases": [ 00:33:06.422 "547a0875-829c-4f92-994e-691346c24ea8" 00:33:06.422 ], 00:33:06.422 "product_name": "Malloc disk", 00:33:06.422 "block_size": 512, 00:33:06.422 "num_blocks": 65536, 00:33:06.422 "uuid": "547a0875-829c-4f92-994e-691346c24ea8", 00:33:06.422 "assigned_rate_limits": { 00:33:06.422 "rw_ios_per_sec": 0, 00:33:06.422 "rw_mbytes_per_sec": 0, 00:33:06.422 "r_mbytes_per_sec": 0, 00:33:06.422 "w_mbytes_per_sec": 0 00:33:06.422 }, 00:33:06.422 "claimed": true, 00:33:06.422 "claim_type": "exclusive_write", 00:33:06.422 "zoned": false, 00:33:06.422 "supported_io_types": { 00:33:06.422 "read": true, 00:33:06.422 "write": true, 00:33:06.422 "unmap": true, 00:33:06.422 "flush": true, 00:33:06.422 "reset": true, 00:33:06.422 "nvme_admin": false, 00:33:06.422 "nvme_io": false, 00:33:06.422 "nvme_io_md": false, 00:33:06.422 "write_zeroes": true, 00:33:06.422 "zcopy": true, 00:33:06.422 "get_zone_info": false, 00:33:06.422 "zone_management": false, 00:33:06.422 "zone_append": false, 00:33:06.422 "compare": false, 00:33:06.422 "compare_and_write": false, 00:33:06.422 "abort": true, 00:33:06.422 "seek_hole": false, 00:33:06.422 "seek_data": false, 00:33:06.422 "copy": true, 00:33:06.422 "nvme_iov_md": false 00:33:06.422 }, 00:33:06.422 "memory_domains": [ 00:33:06.422 { 00:33:06.422 "dma_device_id": "system", 00:33:06.422 "dma_device_type": 1 00:33:06.422 }, 00:33:06.422 { 00:33:06.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:06.422 "dma_device_type": 2 00:33:06.422 } 00:33:06.422 ], 00:33:06.422 "driver_specific": {} 00:33:06.422 } 00:33:06.422 ] 
00:33:06.422 23:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.422 23:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:33:06.422 23:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:33:06.422 23:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:06.422 23:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:33:06.422 23:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:06.422 23:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:06.422 23:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:06.422 23:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:06.423 23:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:06.423 23:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:06.423 23:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:06.423 23:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:06.423 23:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:06.423 23:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:06.423 23:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.423 23:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:06.423 23:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:33:06.423 23:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.423 23:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:06.423 "name": "Existed_Raid", 00:33:06.423 "uuid": "0d0169c5-581e-4f38-95af-6cf707be82de", 00:33:06.423 "strip_size_kb": 64, 00:33:06.423 "state": "online", 00:33:06.423 "raid_level": "raid0", 00:33:06.423 "superblock": false, 00:33:06.423 "num_base_bdevs": 3, 00:33:06.423 "num_base_bdevs_discovered": 3, 00:33:06.423 "num_base_bdevs_operational": 3, 00:33:06.423 "base_bdevs_list": [ 00:33:06.423 { 00:33:06.423 "name": "BaseBdev1", 00:33:06.423 "uuid": "4199435b-ae9c-4360-a613-a3fa92fbdd71", 00:33:06.423 "is_configured": true, 00:33:06.423 "data_offset": 0, 00:33:06.423 "data_size": 65536 00:33:06.423 }, 00:33:06.423 { 00:33:06.423 "name": "BaseBdev2", 00:33:06.423 "uuid": "b7428524-3d21-48f6-a8b1-1feed7516c28", 00:33:06.423 "is_configured": true, 00:33:06.423 "data_offset": 0, 00:33:06.423 "data_size": 65536 00:33:06.423 }, 00:33:06.423 { 00:33:06.423 "name": "BaseBdev3", 00:33:06.423 "uuid": "547a0875-829c-4f92-994e-691346c24ea8", 00:33:06.423 "is_configured": true, 00:33:06.423 "data_offset": 0, 00:33:06.423 "data_size": 65536 00:33:06.423 } 00:33:06.423 ] 00:33:06.423 }' 00:33:06.423 23:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:06.423 23:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:06.682 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:33:06.682 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:33:06.682 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:06.682 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:33:06.682 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:33:06.682 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:06.682 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:06.682 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:33:06.682 23:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.682 23:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:06.682 [2024-12-09 23:14:47.312377] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:06.941 23:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.941 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:06.941 "name": "Existed_Raid", 00:33:06.941 "aliases": [ 00:33:06.941 "0d0169c5-581e-4f38-95af-6cf707be82de" 00:33:06.941 ], 00:33:06.941 "product_name": "Raid Volume", 00:33:06.941 "block_size": 512, 00:33:06.941 "num_blocks": 196608, 00:33:06.941 "uuid": "0d0169c5-581e-4f38-95af-6cf707be82de", 00:33:06.941 "assigned_rate_limits": { 00:33:06.941 "rw_ios_per_sec": 0, 00:33:06.941 "rw_mbytes_per_sec": 0, 00:33:06.941 "r_mbytes_per_sec": 0, 00:33:06.941 "w_mbytes_per_sec": 0 00:33:06.941 }, 00:33:06.941 "claimed": false, 00:33:06.941 "zoned": false, 00:33:06.941 "supported_io_types": { 00:33:06.941 "read": true, 00:33:06.941 "write": true, 00:33:06.941 "unmap": true, 00:33:06.941 "flush": true, 00:33:06.941 "reset": true, 00:33:06.941 "nvme_admin": false, 00:33:06.941 "nvme_io": false, 00:33:06.941 "nvme_io_md": false, 00:33:06.941 "write_zeroes": true, 00:33:06.941 "zcopy": false, 00:33:06.941 "get_zone_info": false, 00:33:06.941 "zone_management": false, 00:33:06.941 
"zone_append": false, 00:33:06.941 "compare": false, 00:33:06.941 "compare_and_write": false, 00:33:06.941 "abort": false, 00:33:06.941 "seek_hole": false, 00:33:06.941 "seek_data": false, 00:33:06.941 "copy": false, 00:33:06.941 "nvme_iov_md": false 00:33:06.941 }, 00:33:06.941 "memory_domains": [ 00:33:06.941 { 00:33:06.941 "dma_device_id": "system", 00:33:06.941 "dma_device_type": 1 00:33:06.941 }, 00:33:06.941 { 00:33:06.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:06.942 "dma_device_type": 2 00:33:06.942 }, 00:33:06.942 { 00:33:06.942 "dma_device_id": "system", 00:33:06.942 "dma_device_type": 1 00:33:06.942 }, 00:33:06.942 { 00:33:06.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:06.942 "dma_device_type": 2 00:33:06.942 }, 00:33:06.942 { 00:33:06.942 "dma_device_id": "system", 00:33:06.942 "dma_device_type": 1 00:33:06.942 }, 00:33:06.942 { 00:33:06.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:06.942 "dma_device_type": 2 00:33:06.942 } 00:33:06.942 ], 00:33:06.942 "driver_specific": { 00:33:06.942 "raid": { 00:33:06.942 "uuid": "0d0169c5-581e-4f38-95af-6cf707be82de", 00:33:06.942 "strip_size_kb": 64, 00:33:06.942 "state": "online", 00:33:06.942 "raid_level": "raid0", 00:33:06.942 "superblock": false, 00:33:06.942 "num_base_bdevs": 3, 00:33:06.942 "num_base_bdevs_discovered": 3, 00:33:06.942 "num_base_bdevs_operational": 3, 00:33:06.942 "base_bdevs_list": [ 00:33:06.942 { 00:33:06.942 "name": "BaseBdev1", 00:33:06.942 "uuid": "4199435b-ae9c-4360-a613-a3fa92fbdd71", 00:33:06.942 "is_configured": true, 00:33:06.942 "data_offset": 0, 00:33:06.942 "data_size": 65536 00:33:06.942 }, 00:33:06.942 { 00:33:06.942 "name": "BaseBdev2", 00:33:06.942 "uuid": "b7428524-3d21-48f6-a8b1-1feed7516c28", 00:33:06.942 "is_configured": true, 00:33:06.942 "data_offset": 0, 00:33:06.942 "data_size": 65536 00:33:06.942 }, 00:33:06.942 { 00:33:06.942 "name": "BaseBdev3", 00:33:06.942 "uuid": "547a0875-829c-4f92-994e-691346c24ea8", 00:33:06.942 "is_configured": true, 
00:33:06.942 "data_offset": 0, 00:33:06.942 "data_size": 65536 00:33:06.942 } 00:33:06.942 ] 00:33:06.942 } 00:33:06.942 } 00:33:06.942 }' 00:33:06.942 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:06.942 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:33:06.942 BaseBdev2 00:33:06.942 BaseBdev3' 00:33:06.942 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:06.942 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:33:06.942 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:06.942 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:33:06.942 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:06.942 23:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.942 23:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:06.942 23:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.942 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:06.942 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:06.942 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:06.942 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:33:06.942 23:14:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.942 23:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:06.942 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:06.942 23:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.942 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:06.942 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:06.942 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:06.942 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:33:06.942 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:06.942 23:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.942 23:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:06.942 23:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.942 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:06.942 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:06.942 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:33:06.942 23:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.942 23:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:06.942 [2024-12-09 23:14:47.551798] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:06.942 [2024-12-09 23:14:47.551966] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:06.942 [2024-12-09 23:14:47.552119] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:07.202 23:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.202 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:33:07.202 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:33:07.202 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:33:07.202 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:33:07.202 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:33:07.202 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:33:07.202 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:07.202 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:33:07.202 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:07.202 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:07.202 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:07.202 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:07.202 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:07.202 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:33:07.202 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:07.202 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:07.202 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:07.202 23:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.202 23:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.202 23:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.202 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:07.202 "name": "Existed_Raid", 00:33:07.202 "uuid": "0d0169c5-581e-4f38-95af-6cf707be82de", 00:33:07.202 "strip_size_kb": 64, 00:33:07.202 "state": "offline", 00:33:07.202 "raid_level": "raid0", 00:33:07.202 "superblock": false, 00:33:07.202 "num_base_bdevs": 3, 00:33:07.202 "num_base_bdevs_discovered": 2, 00:33:07.202 "num_base_bdevs_operational": 2, 00:33:07.202 "base_bdevs_list": [ 00:33:07.202 { 00:33:07.202 "name": null, 00:33:07.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:07.202 "is_configured": false, 00:33:07.202 "data_offset": 0, 00:33:07.202 "data_size": 65536 00:33:07.202 }, 00:33:07.202 { 00:33:07.202 "name": "BaseBdev2", 00:33:07.202 "uuid": "b7428524-3d21-48f6-a8b1-1feed7516c28", 00:33:07.202 "is_configured": true, 00:33:07.202 "data_offset": 0, 00:33:07.202 "data_size": 65536 00:33:07.202 }, 00:33:07.202 { 00:33:07.202 "name": "BaseBdev3", 00:33:07.202 "uuid": "547a0875-829c-4f92-994e-691346c24ea8", 00:33:07.202 "is_configured": true, 00:33:07.202 "data_offset": 0, 00:33:07.202 "data_size": 65536 00:33:07.202 } 00:33:07.202 ] 00:33:07.202 }' 00:33:07.202 23:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:07.202 23:14:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.461 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:33:07.461 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:07.461 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:07.461 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:33:07.461 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.461 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.461 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.719 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:33:07.719 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:07.719 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:33:07.719 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.719 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.719 [2024-12-09 23:14:48.110327] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:07.719 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.719 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:33:07.719 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:07.719 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:07.719 23:14:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.719 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:33:07.719 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.719 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.719 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:33:07.719 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:07.719 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:33:07.719 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.719 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.719 [2024-12-09 23:14:48.259194] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:33:07.719 [2024-12-09 23:14:48.259402] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.979 BaseBdev2 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.979 [ 00:33:07.979 { 00:33:07.979 "name": "BaseBdev2", 00:33:07.979 "aliases": [ 00:33:07.979 "00d0a3e7-a372-41ec-a336-2895d6afd3d7" 00:33:07.979 ], 00:33:07.979 "product_name": "Malloc disk", 00:33:07.979 "block_size": 512, 00:33:07.979 "num_blocks": 65536, 00:33:07.979 "uuid": "00d0a3e7-a372-41ec-a336-2895d6afd3d7", 00:33:07.979 "assigned_rate_limits": { 00:33:07.979 "rw_ios_per_sec": 0, 00:33:07.979 "rw_mbytes_per_sec": 0, 00:33:07.979 "r_mbytes_per_sec": 0, 00:33:07.979 "w_mbytes_per_sec": 0 00:33:07.979 }, 00:33:07.979 "claimed": false, 00:33:07.979 "zoned": false, 00:33:07.979 "supported_io_types": { 00:33:07.979 "read": true, 00:33:07.979 "write": true, 00:33:07.979 "unmap": true, 00:33:07.979 "flush": true, 00:33:07.979 "reset": true, 00:33:07.979 "nvme_admin": false, 00:33:07.979 "nvme_io": false, 00:33:07.979 "nvme_io_md": false, 00:33:07.979 "write_zeroes": true, 00:33:07.979 "zcopy": true, 00:33:07.979 "get_zone_info": false, 00:33:07.979 "zone_management": false, 00:33:07.979 "zone_append": false, 00:33:07.979 "compare": false, 00:33:07.979 "compare_and_write": false, 00:33:07.979 "abort": true, 00:33:07.979 "seek_hole": false, 00:33:07.979 "seek_data": false, 00:33:07.979 "copy": true, 00:33:07.979 "nvme_iov_md": false 00:33:07.979 }, 00:33:07.979 "memory_domains": [ 00:33:07.979 { 00:33:07.979 "dma_device_id": "system", 00:33:07.979 "dma_device_type": 1 00:33:07.979 }, 
00:33:07.979 { 00:33:07.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:07.979 "dma_device_type": 2 00:33:07.979 } 00:33:07.979 ], 00:33:07.979 "driver_specific": {} 00:33:07.979 } 00:33:07.979 ] 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.979 BaseBdev3 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.979 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.979 [ 00:33:07.979 { 00:33:07.980 "name": "BaseBdev3", 00:33:07.980 "aliases": [ 00:33:07.980 "1943afef-2d85-4cc9-9d89-952b5bde42a0" 00:33:07.980 ], 00:33:07.980 "product_name": "Malloc disk", 00:33:07.980 "block_size": 512, 00:33:07.980 "num_blocks": 65536, 00:33:07.980 "uuid": "1943afef-2d85-4cc9-9d89-952b5bde42a0", 00:33:07.980 "assigned_rate_limits": { 00:33:07.980 "rw_ios_per_sec": 0, 00:33:07.980 "rw_mbytes_per_sec": 0, 00:33:07.980 "r_mbytes_per_sec": 0, 00:33:07.980 "w_mbytes_per_sec": 0 00:33:07.980 }, 00:33:07.980 "claimed": false, 00:33:07.980 "zoned": false, 00:33:07.980 "supported_io_types": { 00:33:07.980 "read": true, 00:33:07.980 "write": true, 00:33:07.980 "unmap": true, 00:33:07.980 "flush": true, 00:33:07.980 "reset": true, 00:33:07.980 "nvme_admin": false, 00:33:07.980 "nvme_io": false, 00:33:07.980 "nvme_io_md": false, 00:33:07.980 "write_zeroes": true, 00:33:07.980 "zcopy": true, 00:33:07.980 "get_zone_info": false, 00:33:07.980 "zone_management": false, 00:33:07.980 "zone_append": false, 00:33:07.980 "compare": false, 00:33:07.980 "compare_and_write": false, 00:33:07.980 "abort": true, 00:33:07.980 "seek_hole": false, 00:33:07.980 "seek_data": false, 00:33:07.980 "copy": true, 00:33:07.980 "nvme_iov_md": false 00:33:07.980 }, 00:33:07.980 "memory_domains": [ 00:33:07.980 { 00:33:07.980 "dma_device_id": "system", 00:33:07.980 "dma_device_type": 1 00:33:07.980 }, 00:33:07.980 { 
00:33:07.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:07.980 "dma_device_type": 2 00:33:07.980 } 00:33:07.980 ], 00:33:07.980 "driver_specific": {} 00:33:07.980 } 00:33:07.980 ] 00:33:07.980 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.980 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:33:07.980 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:33:07.980 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:07.980 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:33:07.980 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.980 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.980 [2024-12-09 23:14:48.592632] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:07.980 [2024-12-09 23:14:48.592818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:07.980 [2024-12-09 23:14:48.592923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:07.980 [2024-12-09 23:14:48.595259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:07.980 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.980 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:07.980 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:07.980 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:33:07.980 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:07.980 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:07.980 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:07.980 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:07.980 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:07.980 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:07.980 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:07.980 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:07.980 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.980 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.980 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:08.239 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.239 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:08.239 "name": "Existed_Raid", 00:33:08.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:08.239 "strip_size_kb": 64, 00:33:08.239 "state": "configuring", 00:33:08.239 "raid_level": "raid0", 00:33:08.239 "superblock": false, 00:33:08.239 "num_base_bdevs": 3, 00:33:08.239 "num_base_bdevs_discovered": 2, 00:33:08.239 "num_base_bdevs_operational": 3, 00:33:08.239 "base_bdevs_list": [ 00:33:08.239 { 00:33:08.239 "name": "BaseBdev1", 00:33:08.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:08.239 
"is_configured": false, 00:33:08.239 "data_offset": 0, 00:33:08.239 "data_size": 0 00:33:08.239 }, 00:33:08.239 { 00:33:08.239 "name": "BaseBdev2", 00:33:08.239 "uuid": "00d0a3e7-a372-41ec-a336-2895d6afd3d7", 00:33:08.239 "is_configured": true, 00:33:08.239 "data_offset": 0, 00:33:08.239 "data_size": 65536 00:33:08.239 }, 00:33:08.239 { 00:33:08.239 "name": "BaseBdev3", 00:33:08.239 "uuid": "1943afef-2d85-4cc9-9d89-952b5bde42a0", 00:33:08.239 "is_configured": true, 00:33:08.239 "data_offset": 0, 00:33:08.239 "data_size": 65536 00:33:08.239 } 00:33:08.239 ] 00:33:08.239 }' 00:33:08.239 23:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:08.239 23:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:08.497 23:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:33:08.497 23:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.497 23:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:08.497 [2024-12-09 23:14:49.016083] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:08.497 23:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.497 23:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:08.497 23:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:08.497 23:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:08.497 23:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:08.497 23:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:08.497 23:14:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:08.497 23:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:08.497 23:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:08.497 23:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:08.497 23:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:08.497 23:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:08.497 23:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:08.497 23:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.497 23:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:08.497 23:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.497 23:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:08.497 "name": "Existed_Raid", 00:33:08.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:08.497 "strip_size_kb": 64, 00:33:08.497 "state": "configuring", 00:33:08.497 "raid_level": "raid0", 00:33:08.497 "superblock": false, 00:33:08.497 "num_base_bdevs": 3, 00:33:08.497 "num_base_bdevs_discovered": 1, 00:33:08.497 "num_base_bdevs_operational": 3, 00:33:08.497 "base_bdevs_list": [ 00:33:08.497 { 00:33:08.497 "name": "BaseBdev1", 00:33:08.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:08.497 "is_configured": false, 00:33:08.497 "data_offset": 0, 00:33:08.497 "data_size": 0 00:33:08.497 }, 00:33:08.497 { 00:33:08.497 "name": null, 00:33:08.497 "uuid": "00d0a3e7-a372-41ec-a336-2895d6afd3d7", 00:33:08.497 "is_configured": false, 00:33:08.497 "data_offset": 0, 
00:33:08.497 "data_size": 65536 00:33:08.498 }, 00:33:08.498 { 00:33:08.498 "name": "BaseBdev3", 00:33:08.498 "uuid": "1943afef-2d85-4cc9-9d89-952b5bde42a0", 00:33:08.498 "is_configured": true, 00:33:08.498 "data_offset": 0, 00:33:08.498 "data_size": 65536 00:33:08.498 } 00:33:08.498 ] 00:33:08.498 }' 00:33:08.498 23:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:08.498 23:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:09.065 [2024-12-09 23:14:49.513471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:09.065 BaseBdev1 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:09.065 [ 00:33:09.065 { 00:33:09.065 "name": "BaseBdev1", 00:33:09.065 "aliases": [ 00:33:09.065 "e2794c06-dc10-46b4-afb7-f6c5ebfafe89" 00:33:09.065 ], 00:33:09.065 "product_name": "Malloc disk", 00:33:09.065 "block_size": 512, 00:33:09.065 "num_blocks": 65536, 00:33:09.065 "uuid": "e2794c06-dc10-46b4-afb7-f6c5ebfafe89", 00:33:09.065 "assigned_rate_limits": { 00:33:09.065 "rw_ios_per_sec": 0, 00:33:09.065 "rw_mbytes_per_sec": 0, 00:33:09.065 "r_mbytes_per_sec": 0, 00:33:09.065 "w_mbytes_per_sec": 0 00:33:09.065 }, 00:33:09.065 "claimed": true, 00:33:09.065 "claim_type": "exclusive_write", 00:33:09.065 "zoned": false, 00:33:09.065 "supported_io_types": { 00:33:09.065 "read": true, 00:33:09.065 "write": true, 00:33:09.065 "unmap": 
true, 00:33:09.065 "flush": true, 00:33:09.065 "reset": true, 00:33:09.065 "nvme_admin": false, 00:33:09.065 "nvme_io": false, 00:33:09.065 "nvme_io_md": false, 00:33:09.065 "write_zeroes": true, 00:33:09.065 "zcopy": true, 00:33:09.065 "get_zone_info": false, 00:33:09.065 "zone_management": false, 00:33:09.065 "zone_append": false, 00:33:09.065 "compare": false, 00:33:09.065 "compare_and_write": false, 00:33:09.065 "abort": true, 00:33:09.065 "seek_hole": false, 00:33:09.065 "seek_data": false, 00:33:09.065 "copy": true, 00:33:09.065 "nvme_iov_md": false 00:33:09.065 }, 00:33:09.065 "memory_domains": [ 00:33:09.065 { 00:33:09.065 "dma_device_id": "system", 00:33:09.065 "dma_device_type": 1 00:33:09.065 }, 00:33:09.065 { 00:33:09.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:09.065 "dma_device_type": 2 00:33:09.065 } 00:33:09.065 ], 00:33:09.065 "driver_specific": {} 00:33:09.065 } 00:33:09.065 ] 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:09.065 23:14:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:09.065 "name": "Existed_Raid", 00:33:09.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:09.065 "strip_size_kb": 64, 00:33:09.065 "state": "configuring", 00:33:09.065 "raid_level": "raid0", 00:33:09.065 "superblock": false, 00:33:09.065 "num_base_bdevs": 3, 00:33:09.065 "num_base_bdevs_discovered": 2, 00:33:09.065 "num_base_bdevs_operational": 3, 00:33:09.065 "base_bdevs_list": [ 00:33:09.065 { 00:33:09.065 "name": "BaseBdev1", 00:33:09.065 "uuid": "e2794c06-dc10-46b4-afb7-f6c5ebfafe89", 00:33:09.065 "is_configured": true, 00:33:09.065 "data_offset": 0, 00:33:09.065 "data_size": 65536 00:33:09.065 }, 00:33:09.065 { 00:33:09.065 "name": null, 00:33:09.065 "uuid": "00d0a3e7-a372-41ec-a336-2895d6afd3d7", 00:33:09.065 "is_configured": false, 00:33:09.065 "data_offset": 0, 00:33:09.065 "data_size": 65536 00:33:09.065 }, 00:33:09.065 { 00:33:09.065 "name": "BaseBdev3", 00:33:09.065 "uuid": "1943afef-2d85-4cc9-9d89-952b5bde42a0", 00:33:09.065 "is_configured": true, 00:33:09.065 "data_offset": 0, 
00:33:09.065 "data_size": 65536 00:33:09.065 } 00:33:09.065 ] 00:33:09.065 }' 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:09.065 23:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:09.652 23:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:33:09.652 23:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:09.652 23:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.652 23:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:09.652 23:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.652 23:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:33:09.652 23:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:33:09.652 23:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.652 23:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:09.652 [2024-12-09 23:14:50.040787] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:33:09.652 23:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.652 23:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:09.652 23:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:09.652 23:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:09.652 23:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:33:09.652 23:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:09.652 23:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:09.652 23:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:09.652 23:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:09.652 23:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:09.652 23:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:09.652 23:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:09.652 23:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:09.652 23:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.652 23:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:09.652 23:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.652 23:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:09.652 "name": "Existed_Raid", 00:33:09.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:09.652 "strip_size_kb": 64, 00:33:09.652 "state": "configuring", 00:33:09.652 "raid_level": "raid0", 00:33:09.652 "superblock": false, 00:33:09.652 "num_base_bdevs": 3, 00:33:09.652 "num_base_bdevs_discovered": 1, 00:33:09.652 "num_base_bdevs_operational": 3, 00:33:09.652 "base_bdevs_list": [ 00:33:09.652 { 00:33:09.652 "name": "BaseBdev1", 00:33:09.652 "uuid": "e2794c06-dc10-46b4-afb7-f6c5ebfafe89", 00:33:09.652 "is_configured": true, 00:33:09.652 "data_offset": 0, 00:33:09.652 "data_size": 65536 00:33:09.652 }, 00:33:09.652 { 
00:33:09.652 "name": null, 00:33:09.652 "uuid": "00d0a3e7-a372-41ec-a336-2895d6afd3d7", 00:33:09.652 "is_configured": false, 00:33:09.652 "data_offset": 0, 00:33:09.652 "data_size": 65536 00:33:09.652 }, 00:33:09.652 { 00:33:09.652 "name": null, 00:33:09.652 "uuid": "1943afef-2d85-4cc9-9d89-952b5bde42a0", 00:33:09.652 "is_configured": false, 00:33:09.652 "data_offset": 0, 00:33:09.652 "data_size": 65536 00:33:09.652 } 00:33:09.652 ] 00:33:09.652 }' 00:33:09.652 23:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:09.652 23:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:09.912 23:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:09.912 23:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:33:09.912 23:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.912 23:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:09.912 23:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.912 23:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:33:09.912 23:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:33:09.912 23:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.912 23:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:09.912 [2024-12-09 23:14:50.492193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:09.912 23:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.912 23:14:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:09.912 23:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:09.912 23:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:09.912 23:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:09.912 23:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:09.912 23:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:09.912 23:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:09.912 23:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:09.912 23:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:09.912 23:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:09.912 23:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:09.912 23:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.912 23:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:09.912 23:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:09.912 23:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.170 23:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:10.170 "name": "Existed_Raid", 00:33:10.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:10.170 "strip_size_kb": 64, 00:33:10.170 "state": "configuring", 00:33:10.170 "raid_level": "raid0", 00:33:10.170 
"superblock": false, 00:33:10.170 "num_base_bdevs": 3, 00:33:10.170 "num_base_bdevs_discovered": 2, 00:33:10.170 "num_base_bdevs_operational": 3, 00:33:10.170 "base_bdevs_list": [ 00:33:10.170 { 00:33:10.170 "name": "BaseBdev1", 00:33:10.170 "uuid": "e2794c06-dc10-46b4-afb7-f6c5ebfafe89", 00:33:10.170 "is_configured": true, 00:33:10.170 "data_offset": 0, 00:33:10.170 "data_size": 65536 00:33:10.170 }, 00:33:10.170 { 00:33:10.170 "name": null, 00:33:10.170 "uuid": "00d0a3e7-a372-41ec-a336-2895d6afd3d7", 00:33:10.170 "is_configured": false, 00:33:10.170 "data_offset": 0, 00:33:10.170 "data_size": 65536 00:33:10.170 }, 00:33:10.170 { 00:33:10.170 "name": "BaseBdev3", 00:33:10.170 "uuid": "1943afef-2d85-4cc9-9d89-952b5bde42a0", 00:33:10.170 "is_configured": true, 00:33:10.170 "data_offset": 0, 00:33:10.170 "data_size": 65536 00:33:10.170 } 00:33:10.170 ] 00:33:10.170 }' 00:33:10.170 23:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:10.170 23:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:10.428 23:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:10.428 23:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.428 23:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:10.428 23:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:33:10.428 23:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.428 23:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:33:10.428 23:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:33:10.428 23:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:33:10.428 23:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:10.428 [2024-12-09 23:14:50.935643] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:10.428 23:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.428 23:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:10.428 23:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:10.428 23:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:10.428 23:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:10.428 23:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:10.428 23:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:10.428 23:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:10.428 23:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:10.428 23:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:10.428 23:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:10.428 23:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:10.428 23:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:10.428 23:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.428 23:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:10.686 23:14:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.686 23:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:10.686 "name": "Existed_Raid", 00:33:10.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:10.686 "strip_size_kb": 64, 00:33:10.686 "state": "configuring", 00:33:10.686 "raid_level": "raid0", 00:33:10.686 "superblock": false, 00:33:10.686 "num_base_bdevs": 3, 00:33:10.686 "num_base_bdevs_discovered": 1, 00:33:10.686 "num_base_bdevs_operational": 3, 00:33:10.686 "base_bdevs_list": [ 00:33:10.686 { 00:33:10.686 "name": null, 00:33:10.686 "uuid": "e2794c06-dc10-46b4-afb7-f6c5ebfafe89", 00:33:10.686 "is_configured": false, 00:33:10.686 "data_offset": 0, 00:33:10.686 "data_size": 65536 00:33:10.686 }, 00:33:10.686 { 00:33:10.686 "name": null, 00:33:10.686 "uuid": "00d0a3e7-a372-41ec-a336-2895d6afd3d7", 00:33:10.686 "is_configured": false, 00:33:10.686 "data_offset": 0, 00:33:10.686 "data_size": 65536 00:33:10.686 }, 00:33:10.686 { 00:33:10.686 "name": "BaseBdev3", 00:33:10.686 "uuid": "1943afef-2d85-4cc9-9d89-952b5bde42a0", 00:33:10.686 "is_configured": true, 00:33:10.686 "data_offset": 0, 00:33:10.686 "data_size": 65536 00:33:10.686 } 00:33:10.686 ] 00:33:10.686 }' 00:33:10.686 23:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:10.686 23:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:10.944 23:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:10.944 23:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.944 23:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:10.944 23:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:33:10.944 23:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:33:10.944 23:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:33:10.944 23:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:33:10.944 23:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.944 23:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:10.944 [2024-12-09 23:14:51.557437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:10.944 23:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.944 23:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:10.944 23:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:10.944 23:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:10.944 23:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:10.944 23:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:10.944 23:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:10.944 23:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:10.944 23:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:10.945 23:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:10.945 23:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:10.945 23:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:33:10.945 23:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.945 23:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:10.945 23:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:11.203 23:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.203 23:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:11.203 "name": "Existed_Raid", 00:33:11.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:11.203 "strip_size_kb": 64, 00:33:11.203 "state": "configuring", 00:33:11.203 "raid_level": "raid0", 00:33:11.203 "superblock": false, 00:33:11.203 "num_base_bdevs": 3, 00:33:11.203 "num_base_bdevs_discovered": 2, 00:33:11.203 "num_base_bdevs_operational": 3, 00:33:11.203 "base_bdevs_list": [ 00:33:11.203 { 00:33:11.203 "name": null, 00:33:11.203 "uuid": "e2794c06-dc10-46b4-afb7-f6c5ebfafe89", 00:33:11.203 "is_configured": false, 00:33:11.203 "data_offset": 0, 00:33:11.203 "data_size": 65536 00:33:11.203 }, 00:33:11.203 { 00:33:11.203 "name": "BaseBdev2", 00:33:11.203 "uuid": "00d0a3e7-a372-41ec-a336-2895d6afd3d7", 00:33:11.203 "is_configured": true, 00:33:11.203 "data_offset": 0, 00:33:11.203 "data_size": 65536 00:33:11.203 }, 00:33:11.203 { 00:33:11.203 "name": "BaseBdev3", 00:33:11.203 "uuid": "1943afef-2d85-4cc9-9d89-952b5bde42a0", 00:33:11.203 "is_configured": true, 00:33:11.203 "data_offset": 0, 00:33:11.203 "data_size": 65536 00:33:11.203 } 00:33:11.203 ] 00:33:11.203 }' 00:33:11.203 23:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:11.203 23:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:11.462 23:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:11.462 23:14:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:33:11.462 23:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.462 23:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:11.462 23:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.462 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:33:11.462 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:11.462 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.462 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:11.462 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:33:11.462 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.462 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e2794c06-dc10-46b4-afb7-f6c5ebfafe89 00:33:11.462 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.462 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:11.462 [2024-12-09 23:14:52.088899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:33:11.462 [2024-12-09 23:14:52.089159] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:33:11.462 [2024-12-09 23:14:52.089190] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:33:11.462 [2024-12-09 23:14:52.089524] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:33:11.462 [2024-12-09 23:14:52.089706] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:33:11.462 [2024-12-09 23:14:52.089717] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:33:11.462 [2024-12-09 23:14:52.089994] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:11.462 NewBaseBdev 00:33:11.462 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.462 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:33:11.462 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:33:11.462 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:11.462 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:33:11.462 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:11.462 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:11.462 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:11.462 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.462 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:11.720 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.720 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:33:11.720 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.720 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:33:11.720 [ 00:33:11.720 { 00:33:11.720 "name": "NewBaseBdev", 00:33:11.720 "aliases": [ 00:33:11.720 "e2794c06-dc10-46b4-afb7-f6c5ebfafe89" 00:33:11.720 ], 00:33:11.720 "product_name": "Malloc disk", 00:33:11.720 "block_size": 512, 00:33:11.720 "num_blocks": 65536, 00:33:11.720 "uuid": "e2794c06-dc10-46b4-afb7-f6c5ebfafe89", 00:33:11.720 "assigned_rate_limits": { 00:33:11.720 "rw_ios_per_sec": 0, 00:33:11.720 "rw_mbytes_per_sec": 0, 00:33:11.720 "r_mbytes_per_sec": 0, 00:33:11.720 "w_mbytes_per_sec": 0 00:33:11.720 }, 00:33:11.720 "claimed": true, 00:33:11.720 "claim_type": "exclusive_write", 00:33:11.720 "zoned": false, 00:33:11.720 "supported_io_types": { 00:33:11.720 "read": true, 00:33:11.720 "write": true, 00:33:11.720 "unmap": true, 00:33:11.720 "flush": true, 00:33:11.720 "reset": true, 00:33:11.720 "nvme_admin": false, 00:33:11.720 "nvme_io": false, 00:33:11.720 "nvme_io_md": false, 00:33:11.720 "write_zeroes": true, 00:33:11.720 "zcopy": true, 00:33:11.720 "get_zone_info": false, 00:33:11.720 "zone_management": false, 00:33:11.720 "zone_append": false, 00:33:11.720 "compare": false, 00:33:11.720 "compare_and_write": false, 00:33:11.720 "abort": true, 00:33:11.720 "seek_hole": false, 00:33:11.720 "seek_data": false, 00:33:11.720 "copy": true, 00:33:11.720 "nvme_iov_md": false 00:33:11.720 }, 00:33:11.720 "memory_domains": [ 00:33:11.720 { 00:33:11.720 "dma_device_id": "system", 00:33:11.720 "dma_device_type": 1 00:33:11.720 }, 00:33:11.720 { 00:33:11.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:11.720 "dma_device_type": 2 00:33:11.720 } 00:33:11.720 ], 00:33:11.720 "driver_specific": {} 00:33:11.720 } 00:33:11.720 ] 00:33:11.720 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.720 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:33:11.720 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:33:11.720 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:11.720 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:11.720 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:11.720 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:11.720 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:11.720 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:11.720 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:11.720 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:11.720 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:11.720 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:11.720 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.720 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:11.720 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:11.720 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.720 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:11.720 "name": "Existed_Raid", 00:33:11.720 "uuid": "4ad36e28-e7e8-4973-a3f9-49b43da02bbb", 00:33:11.720 "strip_size_kb": 64, 00:33:11.720 "state": "online", 00:33:11.720 "raid_level": "raid0", 00:33:11.720 "superblock": false, 00:33:11.720 "num_base_bdevs": 3, 00:33:11.720 
"num_base_bdevs_discovered": 3, 00:33:11.720 "num_base_bdevs_operational": 3, 00:33:11.720 "base_bdevs_list": [ 00:33:11.720 { 00:33:11.720 "name": "NewBaseBdev", 00:33:11.720 "uuid": "e2794c06-dc10-46b4-afb7-f6c5ebfafe89", 00:33:11.720 "is_configured": true, 00:33:11.720 "data_offset": 0, 00:33:11.720 "data_size": 65536 00:33:11.720 }, 00:33:11.720 { 00:33:11.720 "name": "BaseBdev2", 00:33:11.720 "uuid": "00d0a3e7-a372-41ec-a336-2895d6afd3d7", 00:33:11.720 "is_configured": true, 00:33:11.720 "data_offset": 0, 00:33:11.720 "data_size": 65536 00:33:11.720 }, 00:33:11.720 { 00:33:11.720 "name": "BaseBdev3", 00:33:11.720 "uuid": "1943afef-2d85-4cc9-9d89-952b5bde42a0", 00:33:11.720 "is_configured": true, 00:33:11.720 "data_offset": 0, 00:33:11.720 "data_size": 65536 00:33:11.720 } 00:33:11.720 ] 00:33:11.720 }' 00:33:11.720 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:11.720 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:11.979 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:33:11.979 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:33:11.979 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:11.979 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:11.979 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:33:11.979 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:11.979 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:33:11.979 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.979 23:14:52 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:33:11.979 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:11.979 [2024-12-09 23:14:52.524704] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:11.979 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.979 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:11.979 "name": "Existed_Raid", 00:33:11.979 "aliases": [ 00:33:11.979 "4ad36e28-e7e8-4973-a3f9-49b43da02bbb" 00:33:11.979 ], 00:33:11.979 "product_name": "Raid Volume", 00:33:11.979 "block_size": 512, 00:33:11.979 "num_blocks": 196608, 00:33:11.979 "uuid": "4ad36e28-e7e8-4973-a3f9-49b43da02bbb", 00:33:11.979 "assigned_rate_limits": { 00:33:11.979 "rw_ios_per_sec": 0, 00:33:11.979 "rw_mbytes_per_sec": 0, 00:33:11.979 "r_mbytes_per_sec": 0, 00:33:11.979 "w_mbytes_per_sec": 0 00:33:11.979 }, 00:33:11.979 "claimed": false, 00:33:11.979 "zoned": false, 00:33:11.979 "supported_io_types": { 00:33:11.979 "read": true, 00:33:11.979 "write": true, 00:33:11.979 "unmap": true, 00:33:11.979 "flush": true, 00:33:11.979 "reset": true, 00:33:11.979 "nvme_admin": false, 00:33:11.979 "nvme_io": false, 00:33:11.979 "nvme_io_md": false, 00:33:11.979 "write_zeroes": true, 00:33:11.979 "zcopy": false, 00:33:11.979 "get_zone_info": false, 00:33:11.979 "zone_management": false, 00:33:11.979 "zone_append": false, 00:33:11.979 "compare": false, 00:33:11.979 "compare_and_write": false, 00:33:11.979 "abort": false, 00:33:11.979 "seek_hole": false, 00:33:11.979 "seek_data": false, 00:33:11.979 "copy": false, 00:33:11.979 "nvme_iov_md": false 00:33:11.979 }, 00:33:11.979 "memory_domains": [ 00:33:11.979 { 00:33:11.979 "dma_device_id": "system", 00:33:11.979 "dma_device_type": 1 00:33:11.979 }, 00:33:11.979 { 00:33:11.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:11.979 "dma_device_type": 2 00:33:11.979 }, 00:33:11.979 
{ 00:33:11.979 "dma_device_id": "system", 00:33:11.979 "dma_device_type": 1 00:33:11.979 }, 00:33:11.979 { 00:33:11.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:11.979 "dma_device_type": 2 00:33:11.979 }, 00:33:11.980 { 00:33:11.980 "dma_device_id": "system", 00:33:11.980 "dma_device_type": 1 00:33:11.980 }, 00:33:11.980 { 00:33:11.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:11.980 "dma_device_type": 2 00:33:11.980 } 00:33:11.980 ], 00:33:11.980 "driver_specific": { 00:33:11.980 "raid": { 00:33:11.980 "uuid": "4ad36e28-e7e8-4973-a3f9-49b43da02bbb", 00:33:11.980 "strip_size_kb": 64, 00:33:11.980 "state": "online", 00:33:11.980 "raid_level": "raid0", 00:33:11.980 "superblock": false, 00:33:11.980 "num_base_bdevs": 3, 00:33:11.980 "num_base_bdevs_discovered": 3, 00:33:11.980 "num_base_bdevs_operational": 3, 00:33:11.980 "base_bdevs_list": [ 00:33:11.980 { 00:33:11.980 "name": "NewBaseBdev", 00:33:11.980 "uuid": "e2794c06-dc10-46b4-afb7-f6c5ebfafe89", 00:33:11.980 "is_configured": true, 00:33:11.980 "data_offset": 0, 00:33:11.980 "data_size": 65536 00:33:11.980 }, 00:33:11.980 { 00:33:11.980 "name": "BaseBdev2", 00:33:11.980 "uuid": "00d0a3e7-a372-41ec-a336-2895d6afd3d7", 00:33:11.980 "is_configured": true, 00:33:11.980 "data_offset": 0, 00:33:11.980 "data_size": 65536 00:33:11.980 }, 00:33:11.980 { 00:33:11.980 "name": "BaseBdev3", 00:33:11.980 "uuid": "1943afef-2d85-4cc9-9d89-952b5bde42a0", 00:33:11.980 "is_configured": true, 00:33:11.980 "data_offset": 0, 00:33:11.980 "data_size": 65536 00:33:11.980 } 00:33:11.980 ] 00:33:11.980 } 00:33:11.980 } 00:33:11.980 }' 00:33:11.980 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:11.980 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:33:11.980 BaseBdev2 00:33:11.980 BaseBdev3' 00:33:11.980 23:14:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:12.239 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:33:12.239 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:12.239 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:33:12.239 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:12.239 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.239 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:12.239 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.239 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:12.239 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:12.239 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:12.239 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:33:12.239 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.239 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:12.239 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:12.239 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.239 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:12.239 
23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:12.239 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:12.239 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:33:12.239 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.239 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:12.239 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:12.239 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.239 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:12.239 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:12.239 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:12.239 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.239 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:12.239 [2024-12-09 23:14:52.748102] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:12.239 [2024-12-09 23:14:52.748267] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:12.239 [2024-12-09 23:14:52.748445] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:12.239 [2024-12-09 23:14:52.748541] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:12.239 [2024-12-09 23:14:52.748744] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:33:12.239 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.239 23:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63706 00:33:12.239 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63706 ']' 00:33:12.239 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63706 00:33:12.239 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:33:12.239 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:12.239 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63706 00:33:12.239 killing process with pid 63706 00:33:12.239 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:12.239 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:12.239 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63706' 00:33:12.239 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63706 00:33:12.239 [2024-12-09 23:14:52.797387] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:12.239 23:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63706 00:33:12.498 [2024-12-09 23:14:53.126671] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:13.869 23:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:33:13.869 00:33:13.869 real 0m10.426s 00:33:13.869 user 0m16.458s 00:33:13.869 sys 0m1.987s 00:33:13.869 23:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:13.869 
23:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:13.869 ************************************ 00:33:13.869 END TEST raid_state_function_test 00:33:13.869 ************************************ 00:33:13.869 23:14:54 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:33:13.869 23:14:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:33:13.869 23:14:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:13.869 23:14:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:13.869 ************************************ 00:33:13.869 START TEST raid_state_function_test_sb 00:33:13.869 ************************************ 00:33:13.869 23:14:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:33:13.869 23:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:33:13.869 23:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:33:13.869 23:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:33:13.869 23:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:33:13.869 23:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:33:13.869 23:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:13.869 23:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:33:13.869 23:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:13.869 23:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:13.869 23:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 
00:33:13.869 23:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:13.869 23:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:13.869 23:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:33:13.869 23:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:13.869 23:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:13.869 23:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:33:13.869 23:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:33:13.869 23:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:33:13.869 23:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:33:13.869 23:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:33:13.869 23:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:33:13.869 23:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:33:13.869 23:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:33:13.869 23:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:33:13.869 23:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:33:13.869 23:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:33:13.869 23:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:33:13.869 23:14:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64327 00:33:13.869 23:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64327' 00:33:13.869 Process raid pid: 64327 00:33:13.869 23:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64327 00:33:13.869 23:14:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64327 ']' 00:33:13.869 23:14:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:13.869 23:14:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:13.869 23:14:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:13.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:13.869 23:14:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:13.869 23:14:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.128 [2024-12-09 23:14:54.554522] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:33:14.128 [2024-12-09 23:14:54.554917] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:14.128 [2024-12-09 23:14:54.760675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:14.386 [2024-12-09 23:14:54.891847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:14.646 [2024-12-09 23:14:55.121097] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:14.646 [2024-12-09 23:14:55.121154] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:14.904 23:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:14.904 23:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:33:14.904 23:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:33:14.904 23:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.904 23:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.904 [2024-12-09 23:14:55.417992] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:14.904 [2024-12-09 23:14:55.418057] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:14.904 [2024-12-09 23:14:55.418071] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:14.904 [2024-12-09 23:14:55.418103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:14.904 [2024-12-09 23:14:55.418111] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:33:14.904 [2024-12-09 23:14:55.418133] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:14.904 23:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.904 23:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:14.904 23:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:14.904 23:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:14.904 23:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:14.904 23:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:14.904 23:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:14.904 23:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:14.904 23:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:14.904 23:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:14.904 23:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:14.904 23:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:14.904 23:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:14.904 23:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.904 23:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.904 23:14:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.904 23:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:14.904 "name": "Existed_Raid", 00:33:14.904 "uuid": "819565d3-e3c3-4769-9536-2c057a254fdf", 00:33:14.904 "strip_size_kb": 64, 00:33:14.904 "state": "configuring", 00:33:14.904 "raid_level": "raid0", 00:33:14.904 "superblock": true, 00:33:14.904 "num_base_bdevs": 3, 00:33:14.904 "num_base_bdevs_discovered": 0, 00:33:14.904 "num_base_bdevs_operational": 3, 00:33:14.904 "base_bdevs_list": [ 00:33:14.904 { 00:33:14.904 "name": "BaseBdev1", 00:33:14.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:14.904 "is_configured": false, 00:33:14.904 "data_offset": 0, 00:33:14.904 "data_size": 0 00:33:14.904 }, 00:33:14.904 { 00:33:14.904 "name": "BaseBdev2", 00:33:14.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:14.904 "is_configured": false, 00:33:14.904 "data_offset": 0, 00:33:14.904 "data_size": 0 00:33:14.904 }, 00:33:14.904 { 00:33:14.904 "name": "BaseBdev3", 00:33:14.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:14.904 "is_configured": false, 00:33:14.904 "data_offset": 0, 00:33:14.904 "data_size": 0 00:33:14.904 } 00:33:14.904 ] 00:33:14.904 }' 00:33:14.904 23:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:14.904 23:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:15.162 23:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:15.162 23:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.162 23:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:15.162 [2024-12-09 23:14:55.777471] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:15.162 [2024-12-09 23:14:55.777669] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:33:15.162 23:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.162 23:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:33:15.162 23:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.162 23:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:15.162 [2024-12-09 23:14:55.789458] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:15.162 [2024-12-09 23:14:55.789511] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:15.162 [2024-12-09 23:14:55.789522] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:15.162 [2024-12-09 23:14:55.789537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:15.162 [2024-12-09 23:14:55.789545] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:15.162 [2024-12-09 23:14:55.789558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:15.162 23:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.162 23:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:33:15.163 23:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.163 23:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:15.421 [2024-12-09 23:14:55.842297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:15.421 BaseBdev1 
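BaseBdev1 here is created by `bdev_malloc_create 32 512`, i.e. a 32 MiB malloc bdev with 65536 blocks of 512 bytes. Because the test passes `-s`, each member reserves a superblock region, which is why the member entries later in this log report `data_offset: 2048` and `data_size: 63488`, and why the assembled raid0 bdev is eventually registered with `blockcnt 190464`. A quick sanity check of that arithmetic (all block counts taken from the log; the 2048-block offset is assumed to be the superblock reservation):

```python
BLOCK_SIZE = 512
NUM_BLOCKS = 32 * 1024 * 1024 // BLOCK_SIZE  # 32 MiB malloc bdev -> 65536 blocks
DATA_OFFSET = 2048      # blocks reserved per member (superblock region, per the log)
NUM_BASE_BDEVS = 3

data_size = NUM_BLOCKS - DATA_OFFSET          # usable blocks per member
raid0_blockcnt = data_size * NUM_BASE_BDEVS   # raid0 capacity: sum across members

print(NUM_BLOCKS, data_size, raid0_blockcnt)  # -> 65536 63488 190464
```

These are exactly the `data_size: 63488` per member and `blockcnt 190464, blocklen 512` values that appear once all three base bdevs are claimed and the raid goes online.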
00:33:15.421 23:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.421 23:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:33:15.421 23:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:33:15.421 23:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:15.421 23:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:33:15.421 23:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:15.421 23:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:15.421 23:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:15.421 23:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.421 23:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:15.421 23:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.421 23:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:15.421 23:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.421 23:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:15.421 [ 00:33:15.421 { 00:33:15.421 "name": "BaseBdev1", 00:33:15.421 "aliases": [ 00:33:15.421 "8e45f963-ed93-43c5-8b4d-614e408aae05" 00:33:15.421 ], 00:33:15.421 "product_name": "Malloc disk", 00:33:15.421 "block_size": 512, 00:33:15.421 "num_blocks": 65536, 00:33:15.421 "uuid": "8e45f963-ed93-43c5-8b4d-614e408aae05", 00:33:15.421 "assigned_rate_limits": { 00:33:15.421 
"rw_ios_per_sec": 0, 00:33:15.421 "rw_mbytes_per_sec": 0, 00:33:15.421 "r_mbytes_per_sec": 0, 00:33:15.422 "w_mbytes_per_sec": 0 00:33:15.422 }, 00:33:15.422 "claimed": true, 00:33:15.422 "claim_type": "exclusive_write", 00:33:15.422 "zoned": false, 00:33:15.422 "supported_io_types": { 00:33:15.422 "read": true, 00:33:15.422 "write": true, 00:33:15.422 "unmap": true, 00:33:15.422 "flush": true, 00:33:15.422 "reset": true, 00:33:15.422 "nvme_admin": false, 00:33:15.422 "nvme_io": false, 00:33:15.422 "nvme_io_md": false, 00:33:15.422 "write_zeroes": true, 00:33:15.422 "zcopy": true, 00:33:15.422 "get_zone_info": false, 00:33:15.422 "zone_management": false, 00:33:15.422 "zone_append": false, 00:33:15.422 "compare": false, 00:33:15.422 "compare_and_write": false, 00:33:15.422 "abort": true, 00:33:15.422 "seek_hole": false, 00:33:15.422 "seek_data": false, 00:33:15.422 "copy": true, 00:33:15.422 "nvme_iov_md": false 00:33:15.422 }, 00:33:15.422 "memory_domains": [ 00:33:15.422 { 00:33:15.422 "dma_device_id": "system", 00:33:15.422 "dma_device_type": 1 00:33:15.422 }, 00:33:15.422 { 00:33:15.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:15.422 "dma_device_type": 2 00:33:15.422 } 00:33:15.422 ], 00:33:15.422 "driver_specific": {} 00:33:15.422 } 00:33:15.422 ] 00:33:15.422 23:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.422 23:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:33:15.422 23:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:15.422 23:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:15.422 23:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:15.422 23:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:33:15.422 23:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:15.422 23:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:15.422 23:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:15.422 23:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:15.422 23:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:15.422 23:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:15.422 23:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:15.422 23:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.422 23:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:15.422 23:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:15.422 23:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.422 23:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:15.422 "name": "Existed_Raid", 00:33:15.422 "uuid": "57c6f8ed-48af-40ef-b067-85d280296f76", 00:33:15.422 "strip_size_kb": 64, 00:33:15.422 "state": "configuring", 00:33:15.422 "raid_level": "raid0", 00:33:15.422 "superblock": true, 00:33:15.422 "num_base_bdevs": 3, 00:33:15.422 "num_base_bdevs_discovered": 1, 00:33:15.422 "num_base_bdevs_operational": 3, 00:33:15.422 "base_bdevs_list": [ 00:33:15.422 { 00:33:15.422 "name": "BaseBdev1", 00:33:15.422 "uuid": "8e45f963-ed93-43c5-8b4d-614e408aae05", 00:33:15.422 "is_configured": true, 00:33:15.422 "data_offset": 2048, 00:33:15.422 "data_size": 63488 
00:33:15.422 }, 00:33:15.422 { 00:33:15.422 "name": "BaseBdev2", 00:33:15.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:15.422 "is_configured": false, 00:33:15.422 "data_offset": 0, 00:33:15.422 "data_size": 0 00:33:15.422 }, 00:33:15.422 { 00:33:15.422 "name": "BaseBdev3", 00:33:15.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:15.422 "is_configured": false, 00:33:15.422 "data_offset": 0, 00:33:15.422 "data_size": 0 00:33:15.422 } 00:33:15.422 ] 00:33:15.422 }' 00:33:15.422 23:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:15.422 23:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:15.990 23:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:15.990 23:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.990 23:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:15.990 [2024-12-09 23:14:56.342298] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:15.991 [2024-12-09 23:14:56.342364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:33:15.991 23:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.991 23:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:33:15.991 23:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.991 23:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:15.991 [2024-12-09 23:14:56.354437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:15.991 [2024-12-09 
23:14:56.356917] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:15.991 [2024-12-09 23:14:56.357107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:15.991 [2024-12-09 23:14:56.357205] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:15.991 [2024-12-09 23:14:56.357257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:15.991 23:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.991 23:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:33:15.991 23:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:15.991 23:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:15.991 23:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:15.991 23:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:15.991 23:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:15.991 23:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:15.991 23:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:15.991 23:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:15.991 23:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:15.991 23:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:15.991 23:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:33:15.991 23:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:15.991 23:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.991 23:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:15.991 23:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:15.991 23:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.991 23:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:15.991 "name": "Existed_Raid", 00:33:15.991 "uuid": "67927b93-eb2e-480f-9d35-d277347ebe26", 00:33:15.991 "strip_size_kb": 64, 00:33:15.991 "state": "configuring", 00:33:15.991 "raid_level": "raid0", 00:33:15.991 "superblock": true, 00:33:15.991 "num_base_bdevs": 3, 00:33:15.991 "num_base_bdevs_discovered": 1, 00:33:15.991 "num_base_bdevs_operational": 3, 00:33:15.991 "base_bdevs_list": [ 00:33:15.991 { 00:33:15.991 "name": "BaseBdev1", 00:33:15.991 "uuid": "8e45f963-ed93-43c5-8b4d-614e408aae05", 00:33:15.991 "is_configured": true, 00:33:15.991 "data_offset": 2048, 00:33:15.991 "data_size": 63488 00:33:15.991 }, 00:33:15.991 { 00:33:15.991 "name": "BaseBdev2", 00:33:15.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:15.991 "is_configured": false, 00:33:15.991 "data_offset": 0, 00:33:15.991 "data_size": 0 00:33:15.991 }, 00:33:15.991 { 00:33:15.991 "name": "BaseBdev3", 00:33:15.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:15.991 "is_configured": false, 00:33:15.991 "data_offset": 0, 00:33:15.991 "data_size": 0 00:33:15.991 } 00:33:15.991 ] 00:33:15.991 }' 00:33:15.991 23:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:15.991 23:14:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:33:16.255 23:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:33:16.255 23:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.255 23:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:16.255 [2024-12-09 23:14:56.829929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:16.255 BaseBdev2 00:33:16.255 23:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.255 23:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:33:16.255 23:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:33:16.255 23:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:16.255 23:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:33:16.255 23:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:16.255 23:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:16.255 23:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:16.255 23:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.255 23:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:16.255 23:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.255 23:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:16.255 23:14:56 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.256 23:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:16.256 [ 00:33:16.256 { 00:33:16.256 "name": "BaseBdev2", 00:33:16.256 "aliases": [ 00:33:16.256 "d9787515-e8ea-4201-8996-fe9c39495081" 00:33:16.256 ], 00:33:16.256 "product_name": "Malloc disk", 00:33:16.256 "block_size": 512, 00:33:16.256 "num_blocks": 65536, 00:33:16.256 "uuid": "d9787515-e8ea-4201-8996-fe9c39495081", 00:33:16.256 "assigned_rate_limits": { 00:33:16.256 "rw_ios_per_sec": 0, 00:33:16.256 "rw_mbytes_per_sec": 0, 00:33:16.256 "r_mbytes_per_sec": 0, 00:33:16.256 "w_mbytes_per_sec": 0 00:33:16.256 }, 00:33:16.256 "claimed": true, 00:33:16.256 "claim_type": "exclusive_write", 00:33:16.256 "zoned": false, 00:33:16.256 "supported_io_types": { 00:33:16.256 "read": true, 00:33:16.256 "write": true, 00:33:16.256 "unmap": true, 00:33:16.256 "flush": true, 00:33:16.256 "reset": true, 00:33:16.256 "nvme_admin": false, 00:33:16.256 "nvme_io": false, 00:33:16.256 "nvme_io_md": false, 00:33:16.256 "write_zeroes": true, 00:33:16.256 "zcopy": true, 00:33:16.256 "get_zone_info": false, 00:33:16.256 "zone_management": false, 00:33:16.256 "zone_append": false, 00:33:16.256 "compare": false, 00:33:16.256 "compare_and_write": false, 00:33:16.256 "abort": true, 00:33:16.256 "seek_hole": false, 00:33:16.256 "seek_data": false, 00:33:16.256 "copy": true, 00:33:16.256 "nvme_iov_md": false 00:33:16.256 }, 00:33:16.256 "memory_domains": [ 00:33:16.256 { 00:33:16.256 "dma_device_id": "system", 00:33:16.256 "dma_device_type": 1 00:33:16.256 }, 00:33:16.256 { 00:33:16.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:16.256 "dma_device_type": 2 00:33:16.256 } 00:33:16.256 ], 00:33:16.256 "driver_specific": {} 00:33:16.256 } 00:33:16.256 ] 00:33:16.256 23:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.256 23:14:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:33:16.256 23:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:33:16.256 23:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:16.256 23:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:16.256 23:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:16.256 23:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:16.256 23:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:16.256 23:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:16.256 23:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:16.256 23:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:16.256 23:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:16.256 23:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:16.256 23:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:16.256 23:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:16.256 23:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:16.256 23:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.256 23:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:16.575 23:14:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.575 23:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:16.575 "name": "Existed_Raid", 00:33:16.575 "uuid": "67927b93-eb2e-480f-9d35-d277347ebe26", 00:33:16.575 "strip_size_kb": 64, 00:33:16.575 "state": "configuring", 00:33:16.575 "raid_level": "raid0", 00:33:16.575 "superblock": true, 00:33:16.575 "num_base_bdevs": 3, 00:33:16.575 "num_base_bdevs_discovered": 2, 00:33:16.575 "num_base_bdevs_operational": 3, 00:33:16.575 "base_bdevs_list": [ 00:33:16.575 { 00:33:16.575 "name": "BaseBdev1", 00:33:16.575 "uuid": "8e45f963-ed93-43c5-8b4d-614e408aae05", 00:33:16.575 "is_configured": true, 00:33:16.575 "data_offset": 2048, 00:33:16.575 "data_size": 63488 00:33:16.575 }, 00:33:16.575 { 00:33:16.575 "name": "BaseBdev2", 00:33:16.575 "uuid": "d9787515-e8ea-4201-8996-fe9c39495081", 00:33:16.575 "is_configured": true, 00:33:16.575 "data_offset": 2048, 00:33:16.575 "data_size": 63488 00:33:16.575 }, 00:33:16.575 { 00:33:16.575 "name": "BaseBdev3", 00:33:16.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:16.575 "is_configured": false, 00:33:16.575 "data_offset": 0, 00:33:16.575 "data_size": 0 00:33:16.575 } 00:33:16.575 ] 00:33:16.575 }' 00:33:16.575 23:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:16.575 23:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:16.835 23:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:33:16.835 23:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.835 23:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:16.835 [2024-12-09 23:14:57.365348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:16.835 [2024-12-09 23:14:57.365676] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:33:16.835 [2024-12-09 23:14:57.365704] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:33:16.835 [2024-12-09 23:14:57.366003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:33:16.835 [2024-12-09 23:14:57.366174] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:33:16.835 [2024-12-09 23:14:57.366192] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:33:16.835 [2024-12-09 23:14:57.366351] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:16.835 BaseBdev3 00:33:16.835 23:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.835 23:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:33:16.835 23:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:33:16.835 23:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:16.835 23:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:33:16.835 23:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:16.835 23:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:16.835 23:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:16.835 23:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.835 23:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:16.835 23:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:33:16.835 23:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:33:16.835 23:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.835 23:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:16.835 [ 00:33:16.835 { 00:33:16.835 "name": "BaseBdev3", 00:33:16.835 "aliases": [ 00:33:16.835 "9859d456-efb0-4246-add6-8b549ed67ab6" 00:33:16.835 ], 00:33:16.835 "product_name": "Malloc disk", 00:33:16.835 "block_size": 512, 00:33:16.835 "num_blocks": 65536, 00:33:16.835 "uuid": "9859d456-efb0-4246-add6-8b549ed67ab6", 00:33:16.835 "assigned_rate_limits": { 00:33:16.835 "rw_ios_per_sec": 0, 00:33:16.835 "rw_mbytes_per_sec": 0, 00:33:16.835 "r_mbytes_per_sec": 0, 00:33:16.835 "w_mbytes_per_sec": 0 00:33:16.835 }, 00:33:16.835 "claimed": true, 00:33:16.835 "claim_type": "exclusive_write", 00:33:16.835 "zoned": false, 00:33:16.835 "supported_io_types": { 00:33:16.835 "read": true, 00:33:16.835 "write": true, 00:33:16.835 "unmap": true, 00:33:16.835 "flush": true, 00:33:16.835 "reset": true, 00:33:16.835 "nvme_admin": false, 00:33:16.835 "nvme_io": false, 00:33:16.835 "nvme_io_md": false, 00:33:16.835 "write_zeroes": true, 00:33:16.835 "zcopy": true, 00:33:16.835 "get_zone_info": false, 00:33:16.835 "zone_management": false, 00:33:16.835 "zone_append": false, 00:33:16.835 "compare": false, 00:33:16.835 "compare_and_write": false, 00:33:16.835 "abort": true, 00:33:16.835 "seek_hole": false, 00:33:16.835 "seek_data": false, 00:33:16.835 "copy": true, 00:33:16.835 "nvme_iov_md": false 00:33:16.835 }, 00:33:16.835 "memory_domains": [ 00:33:16.835 { 00:33:16.835 "dma_device_id": "system", 00:33:16.835 "dma_device_type": 1 00:33:16.835 }, 00:33:16.835 { 00:33:16.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:16.835 "dma_device_type": 2 00:33:16.835 } 00:33:16.835 ], 00:33:16.835 "driver_specific": 
{} 00:33:16.835 } 00:33:16.835 ] 00:33:16.835 23:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.835 23:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:33:16.835 23:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:33:16.835 23:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:16.835 23:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:33:16.835 23:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:16.835 23:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:16.835 23:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:16.835 23:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:16.835 23:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:16.835 23:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:16.835 23:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:16.835 23:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:16.835 23:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:16.835 23:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:16.835 23:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.835 23:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:16.835 
23:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:16.835 23:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.835 23:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:16.835 "name": "Existed_Raid", 00:33:16.835 "uuid": "67927b93-eb2e-480f-9d35-d277347ebe26", 00:33:16.835 "strip_size_kb": 64, 00:33:16.835 "state": "online", 00:33:16.835 "raid_level": "raid0", 00:33:16.835 "superblock": true, 00:33:16.835 "num_base_bdevs": 3, 00:33:16.835 "num_base_bdevs_discovered": 3, 00:33:16.835 "num_base_bdevs_operational": 3, 00:33:16.835 "base_bdevs_list": [ 00:33:16.835 { 00:33:16.835 "name": "BaseBdev1", 00:33:16.835 "uuid": "8e45f963-ed93-43c5-8b4d-614e408aae05", 00:33:16.835 "is_configured": true, 00:33:16.835 "data_offset": 2048, 00:33:16.835 "data_size": 63488 00:33:16.835 }, 00:33:16.835 { 00:33:16.835 "name": "BaseBdev2", 00:33:16.835 "uuid": "d9787515-e8ea-4201-8996-fe9c39495081", 00:33:16.835 "is_configured": true, 00:33:16.835 "data_offset": 2048, 00:33:16.835 "data_size": 63488 00:33:16.835 }, 00:33:16.835 { 00:33:16.835 "name": "BaseBdev3", 00:33:16.835 "uuid": "9859d456-efb0-4246-add6-8b549ed67ab6", 00:33:16.835 "is_configured": true, 00:33:16.835 "data_offset": 2048, 00:33:16.835 "data_size": 63488 00:33:16.835 } 00:33:16.835 ] 00:33:16.835 }' 00:33:16.835 23:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:16.835 23:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:17.403 23:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:33:17.403 23:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:33:17.403 23:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:33:17.403 23:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:17.403 23:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:33:17.403 23:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:17.403 23:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:33:17.403 23:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.403 23:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:17.403 23:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:17.403 [2024-12-09 23:14:57.817087] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:17.403 23:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.403 23:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:17.403 "name": "Existed_Raid", 00:33:17.403 "aliases": [ 00:33:17.403 "67927b93-eb2e-480f-9d35-d277347ebe26" 00:33:17.403 ], 00:33:17.403 "product_name": "Raid Volume", 00:33:17.403 "block_size": 512, 00:33:17.403 "num_blocks": 190464, 00:33:17.403 "uuid": "67927b93-eb2e-480f-9d35-d277347ebe26", 00:33:17.403 "assigned_rate_limits": { 00:33:17.403 "rw_ios_per_sec": 0, 00:33:17.403 "rw_mbytes_per_sec": 0, 00:33:17.403 "r_mbytes_per_sec": 0, 00:33:17.404 "w_mbytes_per_sec": 0 00:33:17.404 }, 00:33:17.404 "claimed": false, 00:33:17.404 "zoned": false, 00:33:17.404 "supported_io_types": { 00:33:17.404 "read": true, 00:33:17.404 "write": true, 00:33:17.404 "unmap": true, 00:33:17.404 "flush": true, 00:33:17.404 "reset": true, 00:33:17.404 "nvme_admin": false, 00:33:17.404 "nvme_io": false, 00:33:17.404 "nvme_io_md": false, 00:33:17.404 
"write_zeroes": true, 00:33:17.404 "zcopy": false, 00:33:17.404 "get_zone_info": false, 00:33:17.404 "zone_management": false, 00:33:17.404 "zone_append": false, 00:33:17.404 "compare": false, 00:33:17.404 "compare_and_write": false, 00:33:17.404 "abort": false, 00:33:17.404 "seek_hole": false, 00:33:17.404 "seek_data": false, 00:33:17.404 "copy": false, 00:33:17.404 "nvme_iov_md": false 00:33:17.404 }, 00:33:17.404 "memory_domains": [ 00:33:17.404 { 00:33:17.404 "dma_device_id": "system", 00:33:17.404 "dma_device_type": 1 00:33:17.404 }, 00:33:17.404 { 00:33:17.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:17.404 "dma_device_type": 2 00:33:17.404 }, 00:33:17.404 { 00:33:17.404 "dma_device_id": "system", 00:33:17.404 "dma_device_type": 1 00:33:17.404 }, 00:33:17.404 { 00:33:17.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:17.404 "dma_device_type": 2 00:33:17.404 }, 00:33:17.404 { 00:33:17.404 "dma_device_id": "system", 00:33:17.404 "dma_device_type": 1 00:33:17.404 }, 00:33:17.404 { 00:33:17.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:17.404 "dma_device_type": 2 00:33:17.404 } 00:33:17.404 ], 00:33:17.404 "driver_specific": { 00:33:17.404 "raid": { 00:33:17.404 "uuid": "67927b93-eb2e-480f-9d35-d277347ebe26", 00:33:17.404 "strip_size_kb": 64, 00:33:17.404 "state": "online", 00:33:17.404 "raid_level": "raid0", 00:33:17.404 "superblock": true, 00:33:17.404 "num_base_bdevs": 3, 00:33:17.404 "num_base_bdevs_discovered": 3, 00:33:17.404 "num_base_bdevs_operational": 3, 00:33:17.404 "base_bdevs_list": [ 00:33:17.404 { 00:33:17.404 "name": "BaseBdev1", 00:33:17.404 "uuid": "8e45f963-ed93-43c5-8b4d-614e408aae05", 00:33:17.404 "is_configured": true, 00:33:17.404 "data_offset": 2048, 00:33:17.404 "data_size": 63488 00:33:17.404 }, 00:33:17.404 { 00:33:17.404 "name": "BaseBdev2", 00:33:17.404 "uuid": "d9787515-e8ea-4201-8996-fe9c39495081", 00:33:17.404 "is_configured": true, 00:33:17.404 "data_offset": 2048, 00:33:17.404 "data_size": 63488 00:33:17.404 }, 
00:33:17.404 { 00:33:17.404 "name": "BaseBdev3", 00:33:17.404 "uuid": "9859d456-efb0-4246-add6-8b549ed67ab6", 00:33:17.404 "is_configured": true, 00:33:17.404 "data_offset": 2048, 00:33:17.404 "data_size": 63488 00:33:17.404 } 00:33:17.404 ] 00:33:17.404 } 00:33:17.404 } 00:33:17.404 }' 00:33:17.404 23:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:17.404 23:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:33:17.404 BaseBdev2 00:33:17.404 BaseBdev3' 00:33:17.404 23:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:17.404 23:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:33:17.404 23:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:17.404 23:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:33:17.404 23:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.404 23:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:17.404 23:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:17.404 23:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.404 23:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:17.404 23:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:17.404 23:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:17.404 
23:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:33:17.404 23:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.404 23:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:17.404 23:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:17.404 23:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.404 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:17.404 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:17.404 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:17.404 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:33:17.404 23:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.404 23:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:17.404 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:17.664 23:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.664 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:17.664 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:17.664 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:33:17.664 23:14:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.664 23:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:17.664 [2024-12-09 23:14:58.084477] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:17.664 [2024-12-09 23:14:58.084510] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:17.664 [2024-12-09 23:14:58.084571] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:17.664 23:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.664 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:33:17.664 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:33:17.664 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:33:17.664 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:33:17.664 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:33:17.664 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:33:17.664 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:17.664 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:33:17.664 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:17.664 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:17.664 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:17.664 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:33:17.664 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:17.664 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:17.664 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:17.664 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:17.664 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:17.664 23:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.664 23:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:17.664 23:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.664 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:17.664 "name": "Existed_Raid", 00:33:17.664 "uuid": "67927b93-eb2e-480f-9d35-d277347ebe26", 00:33:17.664 "strip_size_kb": 64, 00:33:17.664 "state": "offline", 00:33:17.664 "raid_level": "raid0", 00:33:17.664 "superblock": true, 00:33:17.664 "num_base_bdevs": 3, 00:33:17.664 "num_base_bdevs_discovered": 2, 00:33:17.664 "num_base_bdevs_operational": 2, 00:33:17.664 "base_bdevs_list": [ 00:33:17.664 { 00:33:17.664 "name": null, 00:33:17.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:17.664 "is_configured": false, 00:33:17.664 "data_offset": 0, 00:33:17.664 "data_size": 63488 00:33:17.664 }, 00:33:17.664 { 00:33:17.664 "name": "BaseBdev2", 00:33:17.664 "uuid": "d9787515-e8ea-4201-8996-fe9c39495081", 00:33:17.664 "is_configured": true, 00:33:17.664 "data_offset": 2048, 00:33:17.664 "data_size": 63488 00:33:17.664 }, 00:33:17.664 { 00:33:17.664 "name": "BaseBdev3", 00:33:17.665 "uuid": "9859d456-efb0-4246-add6-8b549ed67ab6", 
00:33:17.665 "is_configured": true, 00:33:17.665 "data_offset": 2048, 00:33:17.665 "data_size": 63488 00:33:17.665 } 00:33:17.665 ] 00:33:17.665 }' 00:33:17.665 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:17.665 23:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:18.232 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:33:18.232 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:18.232 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:18.232 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:33:18.232 23:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.232 23:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:18.233 23:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.233 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:33:18.233 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:18.233 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:33:18.233 23:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.233 23:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:18.233 [2024-12-09 23:14:58.664661] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:18.233 23:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.233 23:14:58 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:33:18.233 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:18.233 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:18.233 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:33:18.233 23:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.233 23:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:18.233 23:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.233 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:33:18.233 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:18.233 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:33:18.233 23:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.233 23:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:18.233 [2024-12-09 23:14:58.820324] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:33:18.233 [2024-12-09 23:14:58.820383] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:33:18.493 23:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.493 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:33:18.493 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:18.493 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:33:18.493 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:33:18.493 23:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.493 23:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:18.493 23:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.493 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:33:18.493 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:33:18.493 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:33:18.493 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:33:18.493 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:18.493 23:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:33:18.493 23:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.493 23:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:18.493 BaseBdev2 00:33:18.493 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.493 23:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:33:18.493 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:33:18.493 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:18.493 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:33:18.493 23:14:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:18.493 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:18.493 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:18.493 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.493 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:18.493 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.493 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:18.493 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.493 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:18.493 [ 00:33:18.493 { 00:33:18.493 "name": "BaseBdev2", 00:33:18.493 "aliases": [ 00:33:18.493 "9192d85a-5cbd-4f00-8bb0-0494c2ad5107" 00:33:18.493 ], 00:33:18.493 "product_name": "Malloc disk", 00:33:18.493 "block_size": 512, 00:33:18.493 "num_blocks": 65536, 00:33:18.493 "uuid": "9192d85a-5cbd-4f00-8bb0-0494c2ad5107", 00:33:18.493 "assigned_rate_limits": { 00:33:18.493 "rw_ios_per_sec": 0, 00:33:18.493 "rw_mbytes_per_sec": 0, 00:33:18.493 "r_mbytes_per_sec": 0, 00:33:18.493 "w_mbytes_per_sec": 0 00:33:18.493 }, 00:33:18.494 "claimed": false, 00:33:18.494 "zoned": false, 00:33:18.494 "supported_io_types": { 00:33:18.494 "read": true, 00:33:18.494 "write": true, 00:33:18.494 "unmap": true, 00:33:18.494 "flush": true, 00:33:18.494 "reset": true, 00:33:18.494 "nvme_admin": false, 00:33:18.494 "nvme_io": false, 00:33:18.494 "nvme_io_md": false, 00:33:18.494 "write_zeroes": true, 00:33:18.494 "zcopy": true, 00:33:18.494 "get_zone_info": false, 00:33:18.494 
"zone_management": false, 00:33:18.494 "zone_append": false, 00:33:18.494 "compare": false, 00:33:18.494 "compare_and_write": false, 00:33:18.494 "abort": true, 00:33:18.494 "seek_hole": false, 00:33:18.494 "seek_data": false, 00:33:18.494 "copy": true, 00:33:18.494 "nvme_iov_md": false 00:33:18.494 }, 00:33:18.494 "memory_domains": [ 00:33:18.494 { 00:33:18.494 "dma_device_id": "system", 00:33:18.494 "dma_device_type": 1 00:33:18.494 }, 00:33:18.494 { 00:33:18.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:18.494 "dma_device_type": 2 00:33:18.494 } 00:33:18.494 ], 00:33:18.494 "driver_specific": {} 00:33:18.494 } 00:33:18.494 ] 00:33:18.494 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.494 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:33:18.494 23:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:33:18.494 23:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:18.494 23:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:33:18.494 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.494 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:18.494 BaseBdev3 00:33:18.494 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.494 23:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:33:18.494 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:33:18.494 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:18.494 23:14:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:33:18.494 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:18.494 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:18.494 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:18.494 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.494 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:18.494 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.494 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:33:18.494 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.494 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:18.753 [ 00:33:18.753 { 00:33:18.753 "name": "BaseBdev3", 00:33:18.753 "aliases": [ 00:33:18.753 "c400aa43-c01a-4c2b-a0fa-871ae54ccd01" 00:33:18.753 ], 00:33:18.753 "product_name": "Malloc disk", 00:33:18.753 "block_size": 512, 00:33:18.753 "num_blocks": 65536, 00:33:18.753 "uuid": "c400aa43-c01a-4c2b-a0fa-871ae54ccd01", 00:33:18.753 "assigned_rate_limits": { 00:33:18.753 "rw_ios_per_sec": 0, 00:33:18.753 "rw_mbytes_per_sec": 0, 00:33:18.753 "r_mbytes_per_sec": 0, 00:33:18.753 "w_mbytes_per_sec": 0 00:33:18.753 }, 00:33:18.753 "claimed": false, 00:33:18.753 "zoned": false, 00:33:18.753 "supported_io_types": { 00:33:18.753 "read": true, 00:33:18.753 "write": true, 00:33:18.753 "unmap": true, 00:33:18.753 "flush": true, 00:33:18.753 "reset": true, 00:33:18.753 "nvme_admin": false, 00:33:18.753 "nvme_io": false, 00:33:18.753 "nvme_io_md": false, 00:33:18.753 "write_zeroes": true, 00:33:18.753 
"zcopy": true, 00:33:18.753 "get_zone_info": false, 00:33:18.753 "zone_management": false, 00:33:18.753 "zone_append": false, 00:33:18.753 "compare": false, 00:33:18.753 "compare_and_write": false, 00:33:18.753 "abort": true, 00:33:18.753 "seek_hole": false, 00:33:18.753 "seek_data": false, 00:33:18.753 "copy": true, 00:33:18.753 "nvme_iov_md": false 00:33:18.753 }, 00:33:18.753 "memory_domains": [ 00:33:18.753 { 00:33:18.753 "dma_device_id": "system", 00:33:18.753 "dma_device_type": 1 00:33:18.753 }, 00:33:18.753 { 00:33:18.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:18.753 "dma_device_type": 2 00:33:18.753 } 00:33:18.753 ], 00:33:18.753 "driver_specific": {} 00:33:18.753 } 00:33:18.753 ] 00:33:18.753 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.754 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:33:18.754 23:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:33:18.754 23:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:18.754 23:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:33:18.754 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.754 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:18.754 [2024-12-09 23:14:59.163972] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:18.754 [2024-12-09 23:14:59.164140] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:18.754 [2024-12-09 23:14:59.164185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:18.754 [2024-12-09 23:14:59.166938] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:18.754 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.754 23:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:18.754 23:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:18.754 23:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:18.754 23:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:18.754 23:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:18.754 23:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:18.754 23:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:18.754 23:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:18.754 23:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:18.754 23:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:18.754 23:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:18.754 23:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:18.754 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.754 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:18.754 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.754 23:14:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:18.754 "name": "Existed_Raid", 00:33:18.754 "uuid": "b5a42542-5017-44a2-bf2d-97add1483db6", 00:33:18.754 "strip_size_kb": 64, 00:33:18.754 "state": "configuring", 00:33:18.754 "raid_level": "raid0", 00:33:18.754 "superblock": true, 00:33:18.754 "num_base_bdevs": 3, 00:33:18.754 "num_base_bdevs_discovered": 2, 00:33:18.754 "num_base_bdevs_operational": 3, 00:33:18.754 "base_bdevs_list": [ 00:33:18.754 { 00:33:18.754 "name": "BaseBdev1", 00:33:18.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:18.754 "is_configured": false, 00:33:18.754 "data_offset": 0, 00:33:18.754 "data_size": 0 00:33:18.754 }, 00:33:18.754 { 00:33:18.754 "name": "BaseBdev2", 00:33:18.754 "uuid": "9192d85a-5cbd-4f00-8bb0-0494c2ad5107", 00:33:18.754 "is_configured": true, 00:33:18.754 "data_offset": 2048, 00:33:18.754 "data_size": 63488 00:33:18.754 }, 00:33:18.754 { 00:33:18.754 "name": "BaseBdev3", 00:33:18.754 "uuid": "c400aa43-c01a-4c2b-a0fa-871ae54ccd01", 00:33:18.754 "is_configured": true, 00:33:18.754 "data_offset": 2048, 00:33:18.754 "data_size": 63488 00:33:18.754 } 00:33:18.754 ] 00:33:18.754 }' 00:33:18.754 23:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:18.754 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:19.014 23:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:33:19.014 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.014 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:19.014 [2024-12-09 23:14:59.587371] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:19.014 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.014 23:14:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:19.014 23:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:19.014 23:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:19.014 23:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:19.014 23:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:19.014 23:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:19.014 23:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:19.014 23:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:19.014 23:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:19.014 23:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:19.014 23:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:19.014 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.014 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:19.014 23:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:19.014 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.014 23:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:19.014 "name": "Existed_Raid", 00:33:19.014 "uuid": "b5a42542-5017-44a2-bf2d-97add1483db6", 00:33:19.014 "strip_size_kb": 64, 
00:33:19.014 "state": "configuring", 00:33:19.014 "raid_level": "raid0", 00:33:19.014 "superblock": true, 00:33:19.014 "num_base_bdevs": 3, 00:33:19.014 "num_base_bdevs_discovered": 1, 00:33:19.014 "num_base_bdevs_operational": 3, 00:33:19.014 "base_bdevs_list": [ 00:33:19.014 { 00:33:19.014 "name": "BaseBdev1", 00:33:19.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:19.014 "is_configured": false, 00:33:19.014 "data_offset": 0, 00:33:19.014 "data_size": 0 00:33:19.014 }, 00:33:19.014 { 00:33:19.014 "name": null, 00:33:19.014 "uuid": "9192d85a-5cbd-4f00-8bb0-0494c2ad5107", 00:33:19.014 "is_configured": false, 00:33:19.014 "data_offset": 0, 00:33:19.014 "data_size": 63488 00:33:19.014 }, 00:33:19.014 { 00:33:19.014 "name": "BaseBdev3", 00:33:19.014 "uuid": "c400aa43-c01a-4c2b-a0fa-871ae54ccd01", 00:33:19.014 "is_configured": true, 00:33:19.014 "data_offset": 2048, 00:33:19.014 "data_size": 63488 00:33:19.014 } 00:33:19.014 ] 00:33:19.014 }' 00:33:19.014 23:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:19.014 23:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:19.582 [2024-12-09 23:15:00.130832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:19.582 BaseBdev1 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:19.582 
[ 00:33:19.582 { 00:33:19.582 "name": "BaseBdev1", 00:33:19.582 "aliases": [ 00:33:19.582 "1b15fad5-44dc-44ff-8e71-45c9e9cdcd19" 00:33:19.582 ], 00:33:19.582 "product_name": "Malloc disk", 00:33:19.582 "block_size": 512, 00:33:19.582 "num_blocks": 65536, 00:33:19.582 "uuid": "1b15fad5-44dc-44ff-8e71-45c9e9cdcd19", 00:33:19.582 "assigned_rate_limits": { 00:33:19.582 "rw_ios_per_sec": 0, 00:33:19.582 "rw_mbytes_per_sec": 0, 00:33:19.582 "r_mbytes_per_sec": 0, 00:33:19.582 "w_mbytes_per_sec": 0 00:33:19.582 }, 00:33:19.582 "claimed": true, 00:33:19.582 "claim_type": "exclusive_write", 00:33:19.582 "zoned": false, 00:33:19.582 "supported_io_types": { 00:33:19.582 "read": true, 00:33:19.582 "write": true, 00:33:19.582 "unmap": true, 00:33:19.582 "flush": true, 00:33:19.582 "reset": true, 00:33:19.582 "nvme_admin": false, 00:33:19.582 "nvme_io": false, 00:33:19.582 "nvme_io_md": false, 00:33:19.582 "write_zeroes": true, 00:33:19.582 "zcopy": true, 00:33:19.582 "get_zone_info": false, 00:33:19.582 "zone_management": false, 00:33:19.582 "zone_append": false, 00:33:19.582 "compare": false, 00:33:19.582 "compare_and_write": false, 00:33:19.582 "abort": true, 00:33:19.582 "seek_hole": false, 00:33:19.582 "seek_data": false, 00:33:19.582 "copy": true, 00:33:19.582 "nvme_iov_md": false 00:33:19.582 }, 00:33:19.582 "memory_domains": [ 00:33:19.582 { 00:33:19.582 "dma_device_id": "system", 00:33:19.582 "dma_device_type": 1 00:33:19.582 }, 00:33:19.582 { 00:33:19.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:19.582 "dma_device_type": 2 00:33:19.582 } 00:33:19.582 ], 00:33:19.582 "driver_specific": {} 00:33:19.582 } 00:33:19.582 ] 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:19.582 "name": "Existed_Raid", 00:33:19.582 "uuid": "b5a42542-5017-44a2-bf2d-97add1483db6", 00:33:19.582 "strip_size_kb": 64, 00:33:19.582 "state": "configuring", 00:33:19.582 "raid_level": "raid0", 00:33:19.582 "superblock": true, 
00:33:19.582 "num_base_bdevs": 3, 00:33:19.582 "num_base_bdevs_discovered": 2, 00:33:19.582 "num_base_bdevs_operational": 3, 00:33:19.582 "base_bdevs_list": [ 00:33:19.582 { 00:33:19.582 "name": "BaseBdev1", 00:33:19.582 "uuid": "1b15fad5-44dc-44ff-8e71-45c9e9cdcd19", 00:33:19.582 "is_configured": true, 00:33:19.582 "data_offset": 2048, 00:33:19.582 "data_size": 63488 00:33:19.582 }, 00:33:19.582 { 00:33:19.582 "name": null, 00:33:19.582 "uuid": "9192d85a-5cbd-4f00-8bb0-0494c2ad5107", 00:33:19.582 "is_configured": false, 00:33:19.582 "data_offset": 0, 00:33:19.582 "data_size": 63488 00:33:19.582 }, 00:33:19.582 { 00:33:19.582 "name": "BaseBdev3", 00:33:19.582 "uuid": "c400aa43-c01a-4c2b-a0fa-871ae54ccd01", 00:33:19.582 "is_configured": true, 00:33:19.582 "data_offset": 2048, 00:33:19.582 "data_size": 63488 00:33:19.582 } 00:33:19.582 ] 00:33:19.582 }' 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:19.582 23:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:20.150 23:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:20.150 23:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.150 23:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:20.150 23:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:33:20.150 23:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.150 23:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:33:20.150 23:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:33:20.150 23:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:33:20.150 23:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:20.150 [2024-12-09 23:15:00.634266] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:33:20.150 23:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.150 23:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:20.150 23:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:20.150 23:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:20.150 23:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:20.150 23:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:20.150 23:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:20.150 23:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:20.150 23:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:20.150 23:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:20.150 23:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:20.150 23:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:20.150 23:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:20.150 23:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.150 23:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:33:20.150 23:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.150 23:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:20.150 "name": "Existed_Raid", 00:33:20.150 "uuid": "b5a42542-5017-44a2-bf2d-97add1483db6", 00:33:20.150 "strip_size_kb": 64, 00:33:20.150 "state": "configuring", 00:33:20.150 "raid_level": "raid0", 00:33:20.150 "superblock": true, 00:33:20.150 "num_base_bdevs": 3, 00:33:20.150 "num_base_bdevs_discovered": 1, 00:33:20.150 "num_base_bdevs_operational": 3, 00:33:20.150 "base_bdevs_list": [ 00:33:20.150 { 00:33:20.150 "name": "BaseBdev1", 00:33:20.150 "uuid": "1b15fad5-44dc-44ff-8e71-45c9e9cdcd19", 00:33:20.150 "is_configured": true, 00:33:20.150 "data_offset": 2048, 00:33:20.150 "data_size": 63488 00:33:20.150 }, 00:33:20.150 { 00:33:20.150 "name": null, 00:33:20.150 "uuid": "9192d85a-5cbd-4f00-8bb0-0494c2ad5107", 00:33:20.150 "is_configured": false, 00:33:20.150 "data_offset": 0, 00:33:20.150 "data_size": 63488 00:33:20.150 }, 00:33:20.150 { 00:33:20.150 "name": null, 00:33:20.150 "uuid": "c400aa43-c01a-4c2b-a0fa-871ae54ccd01", 00:33:20.150 "is_configured": false, 00:33:20.150 "data_offset": 0, 00:33:20.150 "data_size": 63488 00:33:20.150 } 00:33:20.150 ] 00:33:20.150 }' 00:33:20.150 23:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:20.150 23:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:20.719 23:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:33:20.719 23:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:20.719 23:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.719 23:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:33:20.719 23:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.719 23:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:33:20.719 23:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:33:20.719 23:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.719 23:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:20.719 [2024-12-09 23:15:01.110311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:20.719 23:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.719 23:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:20.719 23:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:20.719 23:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:20.719 23:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:20.719 23:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:20.719 23:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:20.719 23:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:20.719 23:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:20.719 23:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:20.719 23:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:33:20.719 23:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:20.719 23:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.719 23:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:20.719 23:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:20.719 23:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.719 23:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:20.719 "name": "Existed_Raid", 00:33:20.719 "uuid": "b5a42542-5017-44a2-bf2d-97add1483db6", 00:33:20.719 "strip_size_kb": 64, 00:33:20.719 "state": "configuring", 00:33:20.719 "raid_level": "raid0", 00:33:20.719 "superblock": true, 00:33:20.719 "num_base_bdevs": 3, 00:33:20.719 "num_base_bdevs_discovered": 2, 00:33:20.719 "num_base_bdevs_operational": 3, 00:33:20.719 "base_bdevs_list": [ 00:33:20.719 { 00:33:20.719 "name": "BaseBdev1", 00:33:20.719 "uuid": "1b15fad5-44dc-44ff-8e71-45c9e9cdcd19", 00:33:20.719 "is_configured": true, 00:33:20.719 "data_offset": 2048, 00:33:20.719 "data_size": 63488 00:33:20.719 }, 00:33:20.719 { 00:33:20.719 "name": null, 00:33:20.719 "uuid": "9192d85a-5cbd-4f00-8bb0-0494c2ad5107", 00:33:20.719 "is_configured": false, 00:33:20.719 "data_offset": 0, 00:33:20.719 "data_size": 63488 00:33:20.719 }, 00:33:20.719 { 00:33:20.719 "name": "BaseBdev3", 00:33:20.719 "uuid": "c400aa43-c01a-4c2b-a0fa-871ae54ccd01", 00:33:20.719 "is_configured": true, 00:33:20.719 "data_offset": 2048, 00:33:20.719 "data_size": 63488 00:33:20.719 } 00:33:20.719 ] 00:33:20.719 }' 00:33:20.719 23:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:20.719 23:15:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:33:20.978 23:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:33:20.978 23:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:20.978 23:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.978 23:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:20.978 23:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.978 23:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:33:20.978 23:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:33:20.978 23:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.978 23:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:20.978 [2024-12-09 23:15:01.598341] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:21.237 23:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.237 23:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:21.237 23:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:21.237 23:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:21.237 23:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:21.237 23:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:21.237 23:15:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:21.237 23:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:21.237 23:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:21.237 23:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:21.237 23:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:21.237 23:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:21.237 23:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:21.237 23:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.237 23:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:21.237 23:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.237 23:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:21.237 "name": "Existed_Raid", 00:33:21.237 "uuid": "b5a42542-5017-44a2-bf2d-97add1483db6", 00:33:21.237 "strip_size_kb": 64, 00:33:21.237 "state": "configuring", 00:33:21.237 "raid_level": "raid0", 00:33:21.237 "superblock": true, 00:33:21.237 "num_base_bdevs": 3, 00:33:21.237 "num_base_bdevs_discovered": 1, 00:33:21.237 "num_base_bdevs_operational": 3, 00:33:21.237 "base_bdevs_list": [ 00:33:21.237 { 00:33:21.237 "name": null, 00:33:21.237 "uuid": "1b15fad5-44dc-44ff-8e71-45c9e9cdcd19", 00:33:21.237 "is_configured": false, 00:33:21.237 "data_offset": 0, 00:33:21.237 "data_size": 63488 00:33:21.237 }, 00:33:21.237 { 00:33:21.237 "name": null, 00:33:21.237 "uuid": "9192d85a-5cbd-4f00-8bb0-0494c2ad5107", 00:33:21.237 "is_configured": false, 00:33:21.237 "data_offset": 0, 00:33:21.237 
"data_size": 63488 00:33:21.237 }, 00:33:21.237 { 00:33:21.237 "name": "BaseBdev3", 00:33:21.237 "uuid": "c400aa43-c01a-4c2b-a0fa-871ae54ccd01", 00:33:21.237 "is_configured": true, 00:33:21.237 "data_offset": 2048, 00:33:21.237 "data_size": 63488 00:33:21.237 } 00:33:21.237 ] 00:33:21.237 }' 00:33:21.237 23:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:21.237 23:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:21.496 23:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:33:21.496 23:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:21.496 23:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.496 23:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:21.496 23:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.496 23:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:33:21.496 23:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:33:21.496 23:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.496 23:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:21.496 [2024-12-09 23:15:02.070837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:21.496 23:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.496 23:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:21.496 23:15:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:21.496 23:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:21.496 23:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:21.496 23:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:21.496 23:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:21.496 23:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:21.496 23:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:21.496 23:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:21.496 23:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:21.496 23:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:21.496 23:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.496 23:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:21.496 23:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:21.496 23:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.496 23:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:21.496 "name": "Existed_Raid", 00:33:21.496 "uuid": "b5a42542-5017-44a2-bf2d-97add1483db6", 00:33:21.496 "strip_size_kb": 64, 00:33:21.496 "state": "configuring", 00:33:21.496 "raid_level": "raid0", 00:33:21.496 "superblock": true, 00:33:21.496 "num_base_bdevs": 3, 00:33:21.496 
"num_base_bdevs_discovered": 2, 00:33:21.496 "num_base_bdevs_operational": 3, 00:33:21.496 "base_bdevs_list": [ 00:33:21.496 { 00:33:21.496 "name": null, 00:33:21.496 "uuid": "1b15fad5-44dc-44ff-8e71-45c9e9cdcd19", 00:33:21.496 "is_configured": false, 00:33:21.496 "data_offset": 0, 00:33:21.496 "data_size": 63488 00:33:21.496 }, 00:33:21.496 { 00:33:21.496 "name": "BaseBdev2", 00:33:21.496 "uuid": "9192d85a-5cbd-4f00-8bb0-0494c2ad5107", 00:33:21.496 "is_configured": true, 00:33:21.496 "data_offset": 2048, 00:33:21.496 "data_size": 63488 00:33:21.496 }, 00:33:21.496 { 00:33:21.496 "name": "BaseBdev3", 00:33:21.496 "uuid": "c400aa43-c01a-4c2b-a0fa-871ae54ccd01", 00:33:21.496 "is_configured": true, 00:33:21.496 "data_offset": 2048, 00:33:21.496 "data_size": 63488 00:33:21.496 } 00:33:21.496 ] 00:33:21.496 }' 00:33:21.496 23:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:21.496 23:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.064 23:15:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1b15fad5-44dc-44ff-8e71-45c9e9cdcd19 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:22.064 [2024-12-09 23:15:02.572142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:33:22.064 [2024-12-09 23:15:02.572433] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:33:22.064 [2024-12-09 23:15:02.572454] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:33:22.064 [2024-12-09 23:15:02.572727] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:33:22.064 [2024-12-09 23:15:02.572872] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:33:22.064 [2024-12-09 23:15:02.572882] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:33:22.064 [2024-12-09 23:15:02.573009] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:22.064 NewBaseBdev 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 
00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:22.064 [ 00:33:22.064 { 00:33:22.064 "name": "NewBaseBdev", 00:33:22.064 "aliases": [ 00:33:22.064 "1b15fad5-44dc-44ff-8e71-45c9e9cdcd19" 00:33:22.064 ], 00:33:22.064 "product_name": "Malloc disk", 00:33:22.064 "block_size": 512, 00:33:22.064 "num_blocks": 65536, 00:33:22.064 "uuid": "1b15fad5-44dc-44ff-8e71-45c9e9cdcd19", 00:33:22.064 "assigned_rate_limits": { 00:33:22.064 "rw_ios_per_sec": 0, 00:33:22.064 "rw_mbytes_per_sec": 0, 00:33:22.064 "r_mbytes_per_sec": 0, 00:33:22.064 "w_mbytes_per_sec": 0 00:33:22.064 }, 00:33:22.064 "claimed": true, 00:33:22.064 "claim_type": "exclusive_write", 00:33:22.064 "zoned": false, 00:33:22.064 "supported_io_types": { 00:33:22.064 "read": true, 00:33:22.064 "write": true, 
00:33:22.064 "unmap": true, 00:33:22.064 "flush": true, 00:33:22.064 "reset": true, 00:33:22.064 "nvme_admin": false, 00:33:22.064 "nvme_io": false, 00:33:22.064 "nvme_io_md": false, 00:33:22.064 "write_zeroes": true, 00:33:22.064 "zcopy": true, 00:33:22.064 "get_zone_info": false, 00:33:22.064 "zone_management": false, 00:33:22.064 "zone_append": false, 00:33:22.064 "compare": false, 00:33:22.064 "compare_and_write": false, 00:33:22.064 "abort": true, 00:33:22.064 "seek_hole": false, 00:33:22.064 "seek_data": false, 00:33:22.064 "copy": true, 00:33:22.064 "nvme_iov_md": false 00:33:22.064 }, 00:33:22.064 "memory_domains": [ 00:33:22.064 { 00:33:22.064 "dma_device_id": "system", 00:33:22.064 "dma_device_type": 1 00:33:22.064 }, 00:33:22.064 { 00:33:22.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:22.064 "dma_device_type": 2 00:33:22.064 } 00:33:22.064 ], 00:33:22.064 "driver_specific": {} 00:33:22.064 } 00:33:22.064 ] 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:22.064 "name": "Existed_Raid", 00:33:22.064 "uuid": "b5a42542-5017-44a2-bf2d-97add1483db6", 00:33:22.064 "strip_size_kb": 64, 00:33:22.064 "state": "online", 00:33:22.064 "raid_level": "raid0", 00:33:22.064 "superblock": true, 00:33:22.064 "num_base_bdevs": 3, 00:33:22.064 "num_base_bdevs_discovered": 3, 00:33:22.064 "num_base_bdevs_operational": 3, 00:33:22.064 "base_bdevs_list": [ 00:33:22.064 { 00:33:22.064 "name": "NewBaseBdev", 00:33:22.064 "uuid": "1b15fad5-44dc-44ff-8e71-45c9e9cdcd19", 00:33:22.064 "is_configured": true, 00:33:22.064 "data_offset": 2048, 00:33:22.064 "data_size": 63488 00:33:22.064 }, 00:33:22.064 { 00:33:22.064 "name": "BaseBdev2", 00:33:22.064 "uuid": "9192d85a-5cbd-4f00-8bb0-0494c2ad5107", 00:33:22.064 "is_configured": true, 00:33:22.064 "data_offset": 2048, 00:33:22.064 "data_size": 63488 00:33:22.064 }, 00:33:22.064 { 00:33:22.064 "name": "BaseBdev3", 00:33:22.064 "uuid": 
"c400aa43-c01a-4c2b-a0fa-871ae54ccd01", 00:33:22.064 "is_configured": true, 00:33:22.064 "data_offset": 2048, 00:33:22.064 "data_size": 63488 00:33:22.064 } 00:33:22.064 ] 00:33:22.064 }' 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:22.064 23:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:22.632 23:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:33:22.632 23:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:33:22.632 23:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:22.632 23:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:22.632 23:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:33:22.632 23:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:22.632 23:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:22.632 23:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:33:22.632 23:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.632 23:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:22.632 [2024-12-09 23:15:03.079845] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:22.632 23:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.632 23:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:22.632 "name": "Existed_Raid", 00:33:22.632 "aliases": [ 00:33:22.632 "b5a42542-5017-44a2-bf2d-97add1483db6" 
00:33:22.632 ], 00:33:22.632 "product_name": "Raid Volume", 00:33:22.632 "block_size": 512, 00:33:22.632 "num_blocks": 190464, 00:33:22.632 "uuid": "b5a42542-5017-44a2-bf2d-97add1483db6", 00:33:22.632 "assigned_rate_limits": { 00:33:22.632 "rw_ios_per_sec": 0, 00:33:22.632 "rw_mbytes_per_sec": 0, 00:33:22.632 "r_mbytes_per_sec": 0, 00:33:22.632 "w_mbytes_per_sec": 0 00:33:22.632 }, 00:33:22.632 "claimed": false, 00:33:22.632 "zoned": false, 00:33:22.632 "supported_io_types": { 00:33:22.632 "read": true, 00:33:22.632 "write": true, 00:33:22.632 "unmap": true, 00:33:22.632 "flush": true, 00:33:22.632 "reset": true, 00:33:22.632 "nvme_admin": false, 00:33:22.632 "nvme_io": false, 00:33:22.632 "nvme_io_md": false, 00:33:22.632 "write_zeroes": true, 00:33:22.632 "zcopy": false, 00:33:22.632 "get_zone_info": false, 00:33:22.632 "zone_management": false, 00:33:22.632 "zone_append": false, 00:33:22.632 "compare": false, 00:33:22.632 "compare_and_write": false, 00:33:22.632 "abort": false, 00:33:22.632 "seek_hole": false, 00:33:22.632 "seek_data": false, 00:33:22.632 "copy": false, 00:33:22.632 "nvme_iov_md": false 00:33:22.632 }, 00:33:22.632 "memory_domains": [ 00:33:22.632 { 00:33:22.632 "dma_device_id": "system", 00:33:22.632 "dma_device_type": 1 00:33:22.632 }, 00:33:22.632 { 00:33:22.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:22.632 "dma_device_type": 2 00:33:22.632 }, 00:33:22.632 { 00:33:22.632 "dma_device_id": "system", 00:33:22.632 "dma_device_type": 1 00:33:22.632 }, 00:33:22.632 { 00:33:22.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:22.632 "dma_device_type": 2 00:33:22.632 }, 00:33:22.632 { 00:33:22.632 "dma_device_id": "system", 00:33:22.632 "dma_device_type": 1 00:33:22.632 }, 00:33:22.632 { 00:33:22.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:22.632 "dma_device_type": 2 00:33:22.632 } 00:33:22.632 ], 00:33:22.632 "driver_specific": { 00:33:22.632 "raid": { 00:33:22.632 "uuid": "b5a42542-5017-44a2-bf2d-97add1483db6", 00:33:22.632 
"strip_size_kb": 64, 00:33:22.632 "state": "online", 00:33:22.632 "raid_level": "raid0", 00:33:22.632 "superblock": true, 00:33:22.632 "num_base_bdevs": 3, 00:33:22.632 "num_base_bdevs_discovered": 3, 00:33:22.632 "num_base_bdevs_operational": 3, 00:33:22.632 "base_bdevs_list": [ 00:33:22.632 { 00:33:22.632 "name": "NewBaseBdev", 00:33:22.632 "uuid": "1b15fad5-44dc-44ff-8e71-45c9e9cdcd19", 00:33:22.632 "is_configured": true, 00:33:22.632 "data_offset": 2048, 00:33:22.632 "data_size": 63488 00:33:22.632 }, 00:33:22.632 { 00:33:22.632 "name": "BaseBdev2", 00:33:22.632 "uuid": "9192d85a-5cbd-4f00-8bb0-0494c2ad5107", 00:33:22.632 "is_configured": true, 00:33:22.632 "data_offset": 2048, 00:33:22.632 "data_size": 63488 00:33:22.632 }, 00:33:22.633 { 00:33:22.633 "name": "BaseBdev3", 00:33:22.633 "uuid": "c400aa43-c01a-4c2b-a0fa-871ae54ccd01", 00:33:22.633 "is_configured": true, 00:33:22.633 "data_offset": 2048, 00:33:22.633 "data_size": 63488 00:33:22.633 } 00:33:22.633 ] 00:33:22.633 } 00:33:22.633 } 00:33:22.633 }' 00:33:22.633 23:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:22.633 23:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:33:22.633 BaseBdev2 00:33:22.633 BaseBdev3' 00:33:22.633 23:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:22.633 23:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:33:22.633 23:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:22.633 23:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:22.633 23:15:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:33:22.633 23:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.633 23:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:22.633 23:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.633 23:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:22.633 23:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:22.633 23:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:22.633 23:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:33:22.633 23:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.633 23:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:22.633 23:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:22.633 23:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.891 23:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:22.891 23:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:22.892 23:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:22.892 23:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:33:22.892 23:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.892 23:15:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:22.892 23:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:22.892 23:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.892 23:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:22.892 23:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:22.892 23:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:22.892 23:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.892 23:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:22.892 [2024-12-09 23:15:03.343185] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:22.892 [2024-12-09 23:15:03.343220] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:22.892 [2024-12-09 23:15:03.343326] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:22.892 [2024-12-09 23:15:03.343386] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:22.892 [2024-12-09 23:15:03.343415] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:33:22.892 23:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.892 23:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64327 00:33:22.892 23:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64327 ']' 00:33:22.892 23:15:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64327 00:33:22.892 23:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:33:22.892 23:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:22.892 23:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64327 00:33:22.892 23:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:22.892 23:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:22.892 23:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64327' 00:33:22.892 killing process with pid 64327 00:33:22.892 23:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64327 00:33:22.892 [2024-12-09 23:15:03.401223] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:22.892 23:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64327 00:33:23.150 [2024-12-09 23:15:03.727525] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:24.546 23:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:33:24.546 00:33:24.546 real 0m10.491s 00:33:24.546 user 0m16.549s 00:33:24.546 sys 0m2.017s 00:33:24.546 23:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:24.546 ************************************ 00:33:24.546 END TEST raid_state_function_test_sb 00:33:24.546 ************************************ 00:33:24.546 23:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:24.546 23:15:04 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:33:24.546 23:15:04 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:24.546 23:15:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:24.546 23:15:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:24.546 ************************************ 00:33:24.546 START TEST raid_superblock_test 00:33:24.546 ************************************ 00:33:24.546 23:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:33:24.546 23:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:33:24.546 23:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:33:24.546 23:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:33:24.546 23:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:33:24.546 23:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:33:24.546 23:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:33:24.546 23:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:33:24.546 23:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:33:24.546 23:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:33:24.546 23:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:33:24.546 23:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:33:24.546 23:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:33:24.546 23:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:33:24.546 23:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:33:24.546 23:15:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:33:24.546 23:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:33:24.546 23:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=64947 00:33:24.546 23:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:33:24.546 23:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 64947 00:33:24.546 23:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 64947 ']' 00:33:24.546 23:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:24.546 23:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:24.546 23:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:24.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:24.546 23:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:24.546 23:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:24.546 [2024-12-09 23:15:05.097742] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:33:24.546 [2024-12-09 23:15:05.097873] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64947 ] 00:33:24.803 [2024-12-09 23:15:05.282220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:24.803 [2024-12-09 23:15:05.404936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:25.059 [2024-12-09 23:15:05.620987] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:25.060 [2024-12-09 23:15:05.621029] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:33:25.625 
23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:25.625 malloc1 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:25.625 [2024-12-09 23:15:06.078621] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:25.625 [2024-12-09 23:15:06.078852] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:25.625 [2024-12-09 23:15:06.078918] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:33:25.625 [2024-12-09 23:15:06.079017] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:25.625 [2024-12-09 23:15:06.081659] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:25.625 [2024-12-09 23:15:06.081810] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:25.625 pt1 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:25.625 malloc2 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:25.625 [2024-12-09 23:15:06.138840] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:25.625 [2024-12-09 23:15:06.139031] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:25.625 [2024-12-09 23:15:06.139093] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:33:25.625 [2024-12-09 23:15:06.139168] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:25.625 [2024-12-09 23:15:06.141621] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:25.625 [2024-12-09 23:15:06.141757] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:25.625 
pt2 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:25.625 malloc3 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:25.625 [2024-12-09 23:15:06.211316] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:33:25.625 [2024-12-09 23:15:06.211507] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:25.625 [2024-12-09 23:15:06.211587] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:33:25.625 [2024-12-09 23:15:06.211668] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:25.625 [2024-12-09 23:15:06.214201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:25.625 [2024-12-09 23:15:06.214341] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:33:25.625 pt3 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:25.625 [2024-12-09 23:15:06.223342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:25.625 [2024-12-09 23:15:06.225433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:25.625 [2024-12-09 23:15:06.225503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:33:25.625 [2024-12-09 23:15:06.225651] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:33:25.625 [2024-12-09 23:15:06.225666] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:33:25.625 [2024-12-09 23:15:06.225930] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:33:25.625 [2024-12-09 23:15:06.226085] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:33:25.625 [2024-12-09 23:15:06.226096] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:33:25.625 [2024-12-09 23:15:06.226268] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.625 23:15:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:25.625 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.883 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:25.883 "name": "raid_bdev1", 00:33:25.883 "uuid": "6bde0291-f2ed-4a81-80d3-391503c63feb", 00:33:25.883 "strip_size_kb": 64, 00:33:25.883 "state": "online", 00:33:25.883 "raid_level": "raid0", 00:33:25.883 "superblock": true, 00:33:25.883 "num_base_bdevs": 3, 00:33:25.883 "num_base_bdevs_discovered": 3, 00:33:25.883 "num_base_bdevs_operational": 3, 00:33:25.883 "base_bdevs_list": [ 00:33:25.883 { 00:33:25.883 "name": "pt1", 00:33:25.883 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:25.883 "is_configured": true, 00:33:25.883 "data_offset": 2048, 00:33:25.883 "data_size": 63488 00:33:25.883 }, 00:33:25.883 { 00:33:25.883 "name": "pt2", 00:33:25.883 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:25.883 "is_configured": true, 00:33:25.883 "data_offset": 2048, 00:33:25.883 "data_size": 63488 00:33:25.883 }, 00:33:25.883 { 00:33:25.883 "name": "pt3", 00:33:25.883 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:25.883 "is_configured": true, 00:33:25.883 "data_offset": 2048, 00:33:25.883 "data_size": 63488 00:33:25.883 } 00:33:25.883 ] 00:33:25.883 }' 00:33:25.883 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:25.883 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.142 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:33:26.142 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:33:26.142 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:26.142 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:33:26.142 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:33:26.142 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:26.142 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:26.142 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:26.142 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.142 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.142 [2024-12-09 23:15:06.667019] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:26.142 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.142 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:26.142 "name": "raid_bdev1", 00:33:26.142 "aliases": [ 00:33:26.142 "6bde0291-f2ed-4a81-80d3-391503c63feb" 00:33:26.142 ], 00:33:26.142 "product_name": "Raid Volume", 00:33:26.142 "block_size": 512, 00:33:26.142 "num_blocks": 190464, 00:33:26.142 "uuid": "6bde0291-f2ed-4a81-80d3-391503c63feb", 00:33:26.142 "assigned_rate_limits": { 00:33:26.142 "rw_ios_per_sec": 0, 00:33:26.142 "rw_mbytes_per_sec": 0, 00:33:26.142 "r_mbytes_per_sec": 0, 00:33:26.142 "w_mbytes_per_sec": 0 00:33:26.142 }, 00:33:26.142 "claimed": false, 00:33:26.142 "zoned": false, 00:33:26.142 "supported_io_types": { 00:33:26.142 "read": true, 00:33:26.142 "write": true, 00:33:26.142 "unmap": true, 00:33:26.142 "flush": true, 00:33:26.142 "reset": true, 00:33:26.142 "nvme_admin": false, 00:33:26.142 "nvme_io": false, 00:33:26.142 "nvme_io_md": false, 00:33:26.142 "write_zeroes": true, 00:33:26.142 "zcopy": false, 00:33:26.142 "get_zone_info": false, 00:33:26.142 "zone_management": false, 00:33:26.142 "zone_append": false, 00:33:26.142 "compare": 
false, 00:33:26.142 "compare_and_write": false, 00:33:26.142 "abort": false, 00:33:26.142 "seek_hole": false, 00:33:26.142 "seek_data": false, 00:33:26.142 "copy": false, 00:33:26.142 "nvme_iov_md": false 00:33:26.142 }, 00:33:26.142 "memory_domains": [ 00:33:26.142 { 00:33:26.142 "dma_device_id": "system", 00:33:26.142 "dma_device_type": 1 00:33:26.142 }, 00:33:26.142 { 00:33:26.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:26.142 "dma_device_type": 2 00:33:26.142 }, 00:33:26.142 { 00:33:26.142 "dma_device_id": "system", 00:33:26.142 "dma_device_type": 1 00:33:26.142 }, 00:33:26.142 { 00:33:26.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:26.142 "dma_device_type": 2 00:33:26.142 }, 00:33:26.142 { 00:33:26.142 "dma_device_id": "system", 00:33:26.142 "dma_device_type": 1 00:33:26.142 }, 00:33:26.142 { 00:33:26.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:26.142 "dma_device_type": 2 00:33:26.142 } 00:33:26.142 ], 00:33:26.142 "driver_specific": { 00:33:26.142 "raid": { 00:33:26.142 "uuid": "6bde0291-f2ed-4a81-80d3-391503c63feb", 00:33:26.142 "strip_size_kb": 64, 00:33:26.142 "state": "online", 00:33:26.142 "raid_level": "raid0", 00:33:26.142 "superblock": true, 00:33:26.142 "num_base_bdevs": 3, 00:33:26.142 "num_base_bdevs_discovered": 3, 00:33:26.142 "num_base_bdevs_operational": 3, 00:33:26.142 "base_bdevs_list": [ 00:33:26.142 { 00:33:26.142 "name": "pt1", 00:33:26.142 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:26.142 "is_configured": true, 00:33:26.142 "data_offset": 2048, 00:33:26.142 "data_size": 63488 00:33:26.142 }, 00:33:26.142 { 00:33:26.142 "name": "pt2", 00:33:26.142 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:26.142 "is_configured": true, 00:33:26.142 "data_offset": 2048, 00:33:26.142 "data_size": 63488 00:33:26.142 }, 00:33:26.142 { 00:33:26.142 "name": "pt3", 00:33:26.142 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:26.142 "is_configured": true, 00:33:26.142 "data_offset": 2048, 00:33:26.142 "data_size": 
63488 00:33:26.142 } 00:33:26.142 ] 00:33:26.142 } 00:33:26.142 } 00:33:26.142 }' 00:33:26.142 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:26.142 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:33:26.142 pt2 00:33:26.142 pt3' 00:33:26.142 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:26.403 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:33:26.403 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:26.403 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:33:26.403 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:26.403 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.403 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.403 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.403 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:26.403 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:26.403 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:26.403 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:33:26.403 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.403 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.403 
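The @187–@193 sequence traced above compares the raid bdev's `block_size md_size md_interleave dif_type` tuple against the same tuple from each base bdev (`pt1 pt2 pt3`), which is why the xtrace shows `[[ 512 == \5\1\2\ \ \ ]]` — a `512` followed by three empty fields joined with spaces. A minimal pure-bash sketch of that comparison loop, with the tuples hard-coded as stand-ins for what the real script extracts from `rpc_cmd bdev_get_bdevs` via jq:

```shell
#!/usr/bin/env bash
# Sketch of verify_raid_bdev_properties' comparison (bdev_raid.sh @185-@193).
# Real script builds each tuple with:
#   jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'

cmp_raid_bdev='512   '          # block_size=512; md_size/md_interleave/dif_type empty
base_bdev_names='pt1 pt2 pt3'

get_base_bdev_tuple() {
    # stand-in for: rpc_cmd bdev_get_bdevs -b "$1" | jq -r '...'
    echo '512   '
}

for name in $base_bdev_names; do
    cmp_base_bdev=$(get_base_bdev_tuple "$name")
    # xtrace renders this as: [[ 512 == \5\1\2\ \ \ ]]
    if [[ $cmp_base_bdev == "$cmp_raid_bdev" ]]; then
        echo "$name: tuple matches raid bdev"
    else
        echo "$name: tuple mismatch" >&2
        exit 1
    fi
done
```

Quoting the right-hand side of `==` inside `[[ ]]` forces a literal comparison, so the trailing spaces from the empty metadata fields must match exactly.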
23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:26.403 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.403 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:26.403 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:26.403 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:26.403 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:33:26.403 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:26.403 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.403 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.403 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.403 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:26.403 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:26.403 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:26.403 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.403 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.403 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:33:26.403 [2024-12-09 23:15:06.938607] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:26.403 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:33:26.403 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6bde0291-f2ed-4a81-80d3-391503c63feb 00:33:26.403 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6bde0291-f2ed-4a81-80d3-391503c63feb ']' 00:33:26.403 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:26.403 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.403 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.403 [2024-12-09 23:15:06.986268] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:26.403 [2024-12-09 23:15:06.986304] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:26.403 [2024-12-09 23:15:06.986389] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:26.403 [2024-12-09 23:15:06.986485] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:26.403 [2024-12-09 23:15:06.986498] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:33:26.403 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.403 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:33:26.403 23:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:26.403 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.403 23:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.403 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.403 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
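After the @441 `bdev_raid_delete`, steps @442–@443 re-query `bdev_raid_get_bdevs all` and confirm the list came back empty: `jq -r '.[]'` over an empty array prints nothing, so `raid_bdev` ends up empty and the `'[' -n '' ']'` test fails, letting teardown proceed. A standalone sketch of that emptiness check (jq assumed available; the JSON is hard-coded here in place of the live RPC output):

```shell
# Emulates @442-@443: raid_bdev=$(rpc_cmd bdev_raid_get_bdevs all | jq -r '.[]')
rpc_output='[]'     # stand-in for bdev_raid_get_bdevs output after the delete
raid_bdev=$(echo "$rpc_output" | jq -r '.[]')

if [ -n "$raid_bdev" ]; then
    echo "raid bdev still present: $raid_bdev" >&2
    exit 1
fi
echo "raid bdev list is empty, delete confirmed"
```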
00:33:26.403 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:33:26.403 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:33:26.403 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:33:26.403 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.403 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.661 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.661 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:33:26.661 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:33:26.661 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.661 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.661 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.661 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:33:26.661 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:33:26.661 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.661 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.661 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.661 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:33:26.661 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:33:26.661 23:15:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.661 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.661 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.661 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:33:26.661 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:33:26.661 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:33:26.661 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:33:26.661 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:26.661 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:26.661 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:26.661 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:26.661 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:33:26.661 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.661 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.662 [2024-12-09 23:15:07.122362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:33:26.662 [2024-12-09 23:15:07.124799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:33:26.662 [2024-12-09 23:15:07.124862] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:33:26.662 [2024-12-09 23:15:07.124923] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:33:26.662 [2024-12-09 23:15:07.124987] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:33:26.662 [2024-12-09 23:15:07.125011] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:33:26.662 [2024-12-09 23:15:07.125035] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:26.662 [2024-12-09 23:15:07.125049] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:33:26.662 request: 00:33:26.662 { 00:33:26.662 "name": "raid_bdev1", 00:33:26.662 "raid_level": "raid0", 00:33:26.662 "base_bdevs": [ 00:33:26.662 "malloc1", 00:33:26.662 "malloc2", 00:33:26.662 "malloc3" 00:33:26.662 ], 00:33:26.662 "strip_size_kb": 64, 00:33:26.662 "superblock": false, 00:33:26.662 "method": "bdev_raid_create", 00:33:26.662 "req_id": 1 00:33:26.662 } 00:33:26.662 Got JSON-RPC error response 00:33:26.662 response: 00:33:26.662 { 00:33:26.662 "code": -17, 00:33:26.662 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:33:26.662 } 00:33:26.662 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:26.662 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:33:26.662 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:26.662 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:26.662 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:26.662 23:15:07 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:26.662 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:33:26.662 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.662 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.662 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.662 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:33:26.662 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:33:26.662 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:26.662 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.662 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.662 [2024-12-09 23:15:07.190298] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:26.662 [2024-12-09 23:15:07.190369] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:26.662 [2024-12-09 23:15:07.190405] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:33:26.662 [2024-12-09 23:15:07.190418] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:26.662 [2024-12-09 23:15:07.193071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:26.662 [2024-12-09 23:15:07.193248] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:26.662 [2024-12-09 23:15:07.193384] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:33:26.662 [2024-12-09 23:15:07.193460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
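The @457 step above wraps `rpc_cmd bdev_raid_create ... -n raid_bdev1` in `NOT`, expecting the call to fail with `-17` / `File exists` because the malloc bdevs still carry a foreign superblock; the trace then records `es=1` to note the expected failure. A simplified sketch of that expect-failure pattern (the real `NOT` in autotest_common.sh also validates the command and propagates `es`, which this stand-in omits):

```shell
# Simplified NOT: succeed only when the wrapped command fails, mirroring
# how @457 asserts that bdev_raid_create errors out.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed, which is what we wanted
}

# Hypothetical stand-in for the failing RPC; the log's real call is
#   rpc_cmd bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1
fake_raid_create() {
    echo 'Failed to create RAID bdev raid_bdev1: File exists' >&2
    return 17
}

NOT fake_raid_create && echo "create failed as expected"
```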
00:33:26.662 pt1 00:33:26.662 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.662 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:33:26.662 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:26.662 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:26.662 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:26.662 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:26.662 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:26.662 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:26.662 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:26.662 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:26.662 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:26.662 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:26.662 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:26.662 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.662 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.662 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.662 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:26.662 "name": "raid_bdev1", 00:33:26.662 "uuid": "6bde0291-f2ed-4a81-80d3-391503c63feb", 00:33:26.662 
"strip_size_kb": 64, 00:33:26.662 "state": "configuring", 00:33:26.662 "raid_level": "raid0", 00:33:26.662 "superblock": true, 00:33:26.662 "num_base_bdevs": 3, 00:33:26.662 "num_base_bdevs_discovered": 1, 00:33:26.662 "num_base_bdevs_operational": 3, 00:33:26.662 "base_bdevs_list": [ 00:33:26.662 { 00:33:26.662 "name": "pt1", 00:33:26.662 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:26.662 "is_configured": true, 00:33:26.662 "data_offset": 2048, 00:33:26.662 "data_size": 63488 00:33:26.662 }, 00:33:26.662 { 00:33:26.662 "name": null, 00:33:26.662 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:26.662 "is_configured": false, 00:33:26.662 "data_offset": 2048, 00:33:26.662 "data_size": 63488 00:33:26.662 }, 00:33:26.662 { 00:33:26.662 "name": null, 00:33:26.662 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:26.662 "is_configured": false, 00:33:26.662 "data_offset": 2048, 00:33:26.662 "data_size": 63488 00:33:26.662 } 00:33:26.662 ] 00:33:26.662 }' 00:33:26.662 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:26.662 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:27.234 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:33:27.234 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:27.234 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.234 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:27.234 [2024-12-09 23:15:07.646308] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:27.234 [2024-12-09 23:15:07.646401] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:27.234 [2024-12-09 23:15:07.646431] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:33:27.234 [2024-12-09 23:15:07.646444] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:27.234 [2024-12-09 23:15:07.646920] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:27.234 [2024-12-09 23:15:07.646941] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:27.234 [2024-12-09 23:15:07.647036] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:27.234 [2024-12-09 23:15:07.647067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:27.234 pt2 00:33:27.234 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.234 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:33:27.234 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.234 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:27.234 [2024-12-09 23:15:07.658304] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:33:27.234 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.234 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:33:27.234 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:27.234 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:27.234 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:27.234 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:27.234 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:27.234 23:15:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:27.234 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:27.234 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:27.234 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:27.234 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:27.234 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.234 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:27.234 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:27.234 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.234 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:27.234 "name": "raid_bdev1", 00:33:27.234 "uuid": "6bde0291-f2ed-4a81-80d3-391503c63feb", 00:33:27.234 "strip_size_kb": 64, 00:33:27.234 "state": "configuring", 00:33:27.234 "raid_level": "raid0", 00:33:27.234 "superblock": true, 00:33:27.234 "num_base_bdevs": 3, 00:33:27.234 "num_base_bdevs_discovered": 1, 00:33:27.234 "num_base_bdevs_operational": 3, 00:33:27.234 "base_bdevs_list": [ 00:33:27.234 { 00:33:27.234 "name": "pt1", 00:33:27.234 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:27.234 "is_configured": true, 00:33:27.234 "data_offset": 2048, 00:33:27.234 "data_size": 63488 00:33:27.234 }, 00:33:27.234 { 00:33:27.234 "name": null, 00:33:27.234 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:27.234 "is_configured": false, 00:33:27.234 "data_offset": 0, 00:33:27.234 "data_size": 63488 00:33:27.234 }, 00:33:27.234 { 00:33:27.234 "name": null, 00:33:27.234 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:27.234 
"is_configured": false, 00:33:27.234 "data_offset": 2048, 00:33:27.234 "data_size": 63488 00:33:27.234 } 00:33:27.234 ] 00:33:27.234 }' 00:33:27.234 23:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:27.234 23:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:27.492 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:33:27.492 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:33:27.492 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:27.492 23:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.492 23:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:27.492 [2024-12-09 23:15:08.074300] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:27.492 [2024-12-09 23:15:08.074387] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:27.492 [2024-12-09 23:15:08.074421] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:33:27.492 [2024-12-09 23:15:08.074437] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:27.492 [2024-12-09 23:15:08.074938] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:27.492 [2024-12-09 23:15:08.074965] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:27.492 [2024-12-09 23:15:08.075052] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:27.492 [2024-12-09 23:15:08.075079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:27.492 pt2 00:33:27.492 23:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
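The `verify_raid_bdev_state` output above distinguishes `configuring` (only `pt1` configured, `num_base_bdevs_discovered: 1`) from `online` (all three base bdevs configured). A rough way to pull that count out of such a JSON blob in pure bash — the script itself uses `jq` on `.base_bdevs_list`, and the field names below are copied from the log:

```shell
# Trimmed-down stand-in for the raid_bdev_info JSON captured at @113.
raid_bdev_info='{
  "state": "configuring",
  "base_bdevs_list": [
    { "name": "pt1", "is_configured": true },
    { "name": null, "is_configured": false },
    { "name": null, "is_configured": false }
  ]
}'

# Crude line count; good enough here because each entry sits on its own line.
num_configured=$(grep -c '"is_configured": true' <<< "$raid_bdev_info")
echo "configured base bdevs: $num_configured"
```

This matches the `num_base_bdevs_discovered: 1` the log reports while the raid bdev is still `configuring`.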
00:33:27.492 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:33:27.492 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:33:27.492 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:33:27.492 23:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.492 23:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:27.492 [2024-12-09 23:15:08.086288] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:33:27.492 [2024-12-09 23:15:08.086353] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:27.492 [2024-12-09 23:15:08.086374] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:33:27.492 [2024-12-09 23:15:08.086389] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:27.492 [2024-12-09 23:15:08.086869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:27.492 [2024-12-09 23:15:08.086897] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:33:27.492 [2024-12-09 23:15:08.086975] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:33:27.492 [2024-12-09 23:15:08.087000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:33:27.492 [2024-12-09 23:15:08.087131] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:33:27.492 [2024-12-09 23:15:08.087145] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:33:27.492 [2024-12-09 23:15:08.087436] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:33:27.492 [2024-12-09 23:15:08.087602] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:33:27.492 [2024-12-09 23:15:08.087617] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:33:27.492 [2024-12-09 23:15:08.087764] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:27.492 pt3 00:33:27.492 23:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.492 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:33:27.492 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:33:27.492 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:33:27.492 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:27.492 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:27.492 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:27.492 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:27.492 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:27.492 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:27.492 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:27.492 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:27.492 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:27.492 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:27.492 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:33:27.492 23:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.492 23:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:27.492 23:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.750 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:27.750 "name": "raid_bdev1", 00:33:27.750 "uuid": "6bde0291-f2ed-4a81-80d3-391503c63feb", 00:33:27.750 "strip_size_kb": 64, 00:33:27.750 "state": "online", 00:33:27.750 "raid_level": "raid0", 00:33:27.750 "superblock": true, 00:33:27.750 "num_base_bdevs": 3, 00:33:27.750 "num_base_bdevs_discovered": 3, 00:33:27.750 "num_base_bdevs_operational": 3, 00:33:27.750 "base_bdevs_list": [ 00:33:27.750 { 00:33:27.750 "name": "pt1", 00:33:27.750 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:27.750 "is_configured": true, 00:33:27.751 "data_offset": 2048, 00:33:27.751 "data_size": 63488 00:33:27.751 }, 00:33:27.751 { 00:33:27.751 "name": "pt2", 00:33:27.751 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:27.751 "is_configured": true, 00:33:27.751 "data_offset": 2048, 00:33:27.751 "data_size": 63488 00:33:27.751 }, 00:33:27.751 { 00:33:27.751 "name": "pt3", 00:33:27.751 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:27.751 "is_configured": true, 00:33:27.751 "data_offset": 2048, 00:33:27.751 "data_size": 63488 00:33:27.751 } 00:33:27.751 ] 00:33:27.751 }' 00:33:27.751 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:27.751 23:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:28.011 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:33:28.011 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:33:28.011 23:15:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:28.011 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:28.011 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:33:28.011 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:28.011 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:28.011 23:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.011 23:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:28.011 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:28.011 [2024-12-09 23:15:08.538651] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:28.011 23:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.011 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:28.011 "name": "raid_bdev1", 00:33:28.011 "aliases": [ 00:33:28.011 "6bde0291-f2ed-4a81-80d3-391503c63feb" 00:33:28.011 ], 00:33:28.011 "product_name": "Raid Volume", 00:33:28.011 "block_size": 512, 00:33:28.011 "num_blocks": 190464, 00:33:28.011 "uuid": "6bde0291-f2ed-4a81-80d3-391503c63feb", 00:33:28.011 "assigned_rate_limits": { 00:33:28.011 "rw_ios_per_sec": 0, 00:33:28.011 "rw_mbytes_per_sec": 0, 00:33:28.011 "r_mbytes_per_sec": 0, 00:33:28.011 "w_mbytes_per_sec": 0 00:33:28.011 }, 00:33:28.011 "claimed": false, 00:33:28.011 "zoned": false, 00:33:28.011 "supported_io_types": { 00:33:28.011 "read": true, 00:33:28.011 "write": true, 00:33:28.011 "unmap": true, 00:33:28.011 "flush": true, 00:33:28.011 "reset": true, 00:33:28.011 "nvme_admin": false, 00:33:28.011 "nvme_io": false, 00:33:28.011 "nvme_io_md": false, 00:33:28.011 
"write_zeroes": true, 00:33:28.011 "zcopy": false, 00:33:28.011 "get_zone_info": false, 00:33:28.011 "zone_management": false, 00:33:28.011 "zone_append": false, 00:33:28.011 "compare": false, 00:33:28.011 "compare_and_write": false, 00:33:28.011 "abort": false, 00:33:28.011 "seek_hole": false, 00:33:28.011 "seek_data": false, 00:33:28.011 "copy": false, 00:33:28.011 "nvme_iov_md": false 00:33:28.011 }, 00:33:28.011 "memory_domains": [ 00:33:28.011 { 00:33:28.011 "dma_device_id": "system", 00:33:28.011 "dma_device_type": 1 00:33:28.011 }, 00:33:28.011 { 00:33:28.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:28.011 "dma_device_type": 2 00:33:28.011 }, 00:33:28.011 { 00:33:28.011 "dma_device_id": "system", 00:33:28.011 "dma_device_type": 1 00:33:28.011 }, 00:33:28.011 { 00:33:28.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:28.011 "dma_device_type": 2 00:33:28.011 }, 00:33:28.011 { 00:33:28.011 "dma_device_id": "system", 00:33:28.011 "dma_device_type": 1 00:33:28.011 }, 00:33:28.011 { 00:33:28.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:28.011 "dma_device_type": 2 00:33:28.011 } 00:33:28.011 ], 00:33:28.011 "driver_specific": { 00:33:28.011 "raid": { 00:33:28.011 "uuid": "6bde0291-f2ed-4a81-80d3-391503c63feb", 00:33:28.011 "strip_size_kb": 64, 00:33:28.011 "state": "online", 00:33:28.011 "raid_level": "raid0", 00:33:28.011 "superblock": true, 00:33:28.011 "num_base_bdevs": 3, 00:33:28.011 "num_base_bdevs_discovered": 3, 00:33:28.011 "num_base_bdevs_operational": 3, 00:33:28.011 "base_bdevs_list": [ 00:33:28.011 { 00:33:28.011 "name": "pt1", 00:33:28.011 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:28.011 "is_configured": true, 00:33:28.011 "data_offset": 2048, 00:33:28.011 "data_size": 63488 00:33:28.011 }, 00:33:28.011 { 00:33:28.011 "name": "pt2", 00:33:28.011 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:28.011 "is_configured": true, 00:33:28.011 "data_offset": 2048, 00:33:28.011 "data_size": 63488 00:33:28.011 }, 00:33:28.011 
{ 00:33:28.011 "name": "pt3", 00:33:28.011 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:28.011 "is_configured": true, 00:33:28.011 "data_offset": 2048, 00:33:28.011 "data_size": 63488 00:33:28.011 } 00:33:28.011 ] 00:33:28.011 } 00:33:28.011 } 00:33:28.011 }' 00:33:28.011 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:28.011 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:33:28.011 pt2 00:33:28.011 pt3' 00:33:28.011 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:28.268 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:33:28.268 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:28.268 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:33:28.268 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:28.268 23:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.268 23:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:28.268 23:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.268 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:28.268 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:28.268 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:28.268 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:33:28.268 23:15:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.268 23:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:28.268 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:28.268 23:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.268 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:28.268 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:28.268 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:28.268 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:33:28.268 23:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.269 23:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:28.269 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:28.269 23:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.269 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:28.269 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:28.269 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:33:28.269 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:28.269 23:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.269 23:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:28.269 
[2024-12-09 23:15:08.822594] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:28.269 23:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.269 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6bde0291-f2ed-4a81-80d3-391503c63feb '!=' 6bde0291-f2ed-4a81-80d3-391503c63feb ']' 00:33:28.269 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:33:28.269 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:33:28.269 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:33:28.269 23:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 64947 00:33:28.269 23:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 64947 ']' 00:33:28.269 23:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 64947 00:33:28.269 23:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:33:28.269 23:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:28.269 23:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64947 00:33:28.269 killing process with pid 64947 00:33:28.269 23:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:28.269 23:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:28.269 23:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64947' 00:33:28.269 23:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 64947 00:33:28.269 [2024-12-09 23:15:08.901252] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:28.269 [2024-12-09 23:15:08.901365] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:28.269 23:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 64947 00:33:28.269 [2024-12-09 23:15:08.901439] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:28.269 [2024-12-09 23:15:08.901456] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:33:28.834 [2024-12-09 23:15:09.212153] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:29.769 23:15:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:33:29.769 00:33:29.769 real 0m5.377s 00:33:29.769 user 0m7.738s 00:33:29.769 sys 0m1.036s 00:33:29.769 23:15:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:29.769 23:15:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:29.769 ************************************ 00:33:29.769 END TEST raid_superblock_test 00:33:29.769 ************************************ 00:33:30.027 23:15:10 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:33:30.027 23:15:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:33:30.027 23:15:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:30.027 23:15:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:30.027 ************************************ 00:33:30.027 START TEST raid_read_error_test 00:33:30.027 ************************************ 00:33:30.027 23:15:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:33:30.027 23:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:33:30.027 23:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:33:30.027 23:15:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:33:30.027 23:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:33:30.027 23:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:30.027 23:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:33:30.027 23:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:33:30.027 23:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:30.027 23:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:33:30.027 23:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:33:30.027 23:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:30.027 23:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:33:30.027 23:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:33:30.027 23:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:30.028 23:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:33:30.028 23:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:33:30.028 23:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:33:30.028 23:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:33:30.028 23:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:33:30.028 23:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:33:30.028 23:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:33:30.028 23:15:10 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:33:30.028 23:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:33:30.028 23:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:33:30.028 23:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:33:30.028 23:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.QtyYJ5nKst 00:33:30.028 23:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65200 00:33:30.028 23:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:33:30.028 23:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65200 00:33:30.028 23:15:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65200 ']' 00:33:30.028 23:15:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:30.028 23:15:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:30.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:30.028 23:15:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:30.028 23:15:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:30.028 23:15:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:30.028 [2024-12-09 23:15:10.560434] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:33:30.028 [2024-12-09 23:15:10.560564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65200 ] 00:33:30.285 [2024-12-09 23:15:10.738862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:30.285 [2024-12-09 23:15:10.857815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:30.543 [2024-12-09 23:15:11.069978] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:30.543 [2024-12-09 23:15:11.070016] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:30.803 23:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:30.803 23:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:33:30.803 23:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:33:30.803 23:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:33:30.803 23:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.803 23:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:30.803 BaseBdev1_malloc 00:33:30.803 23:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.803 23:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:33:30.803 23:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.803 23:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:31.062 true 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:31.062 [2024-12-09 23:15:11.452605] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:33:31.062 [2024-12-09 23:15:11.452661] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:31.062 [2024-12-09 23:15:11.452685] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:33:31.062 [2024-12-09 23:15:11.452699] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:31.062 [2024-12-09 23:15:11.455160] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:31.062 [2024-12-09 23:15:11.455202] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:31.062 BaseBdev1 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:31.062 BaseBdev2_malloc 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:31.062 true 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:31.062 [2024-12-09 23:15:11.523321] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:33:31.062 [2024-12-09 23:15:11.523520] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:31.062 [2024-12-09 23:15:11.523552] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:33:31.062 [2024-12-09 23:15:11.523567] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:31.062 [2024-12-09 23:15:11.526202] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:31.062 [2024-12-09 23:15:11.526254] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:33:31.062 BaseBdev2 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:31.062 BaseBdev3_malloc 00:33:31.062 23:15:11 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:31.062 true 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:31.062 [2024-12-09 23:15:11.609533] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:33:31.062 [2024-12-09 23:15:11.609596] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:31.062 [2024-12-09 23:15:11.609620] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:33:31.062 [2024-12-09 23:15:11.609634] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:31.062 [2024-12-09 23:15:11.612099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:31.062 [2024-12-09 23:15:11.612144] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:33:31.062 BaseBdev3 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:31.062 [2024-12-09 23:15:11.621609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:31.062 [2024-12-09 23:15:11.623699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:31.062 [2024-12-09 23:15:11.623774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:31.062 [2024-12-09 23:15:11.623961] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:33:31.062 [2024-12-09 23:15:11.623976] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:33:31.062 [2024-12-09 23:15:11.624250] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:33:31.062 [2024-12-09 23:15:11.624425] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:33:31.062 [2024-12-09 23:15:11.624443] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:33:31.062 [2024-12-09 23:15:11.624590] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:31.062 23:15:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:31.062 "name": "raid_bdev1", 00:33:31.062 "uuid": "5a4481b6-aa51-496a-b0d8-ea8a5a29c358", 00:33:31.062 "strip_size_kb": 64, 00:33:31.062 "state": "online", 00:33:31.062 "raid_level": "raid0", 00:33:31.062 "superblock": true, 00:33:31.062 "num_base_bdevs": 3, 00:33:31.062 "num_base_bdevs_discovered": 3, 00:33:31.062 "num_base_bdevs_operational": 3, 00:33:31.062 "base_bdevs_list": [ 00:33:31.062 { 00:33:31.062 "name": "BaseBdev1", 00:33:31.062 "uuid": "34098be1-9176-54c8-89f0-9b7296de55a6", 00:33:31.062 "is_configured": true, 00:33:31.062 "data_offset": 2048, 00:33:31.062 "data_size": 63488 00:33:31.062 }, 00:33:31.062 { 00:33:31.062 "name": "BaseBdev2", 00:33:31.062 "uuid": "5e3971cf-ad78-52ed-900e-14980ef2a292", 00:33:31.062 "is_configured": true, 00:33:31.062 "data_offset": 2048, 00:33:31.062 "data_size": 63488 
00:33:31.062 }, 00:33:31.062 { 00:33:31.062 "name": "BaseBdev3", 00:33:31.062 "uuid": "ae4149d6-c920-5296-ae36-9f43d6130776", 00:33:31.062 "is_configured": true, 00:33:31.062 "data_offset": 2048, 00:33:31.062 "data_size": 63488 00:33:31.062 } 00:33:31.062 ] 00:33:31.062 }' 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:31.062 23:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:31.629 23:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:33:31.629 23:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:33:31.629 [2024-12-09 23:15:12.166414] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:33:32.564 23:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:33:32.564 23:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.564 23:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:32.564 23:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.564 23:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:33:32.564 23:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:33:32.564 23:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:33:32.564 23:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:33:32.564 23:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:32.564 23:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:33:32.564 23:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:32.564 23:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:32.564 23:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:32.564 23:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:32.564 23:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:32.564 23:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:32.564 23:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:32.564 23:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:32.564 23:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:32.564 23:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.564 23:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:32.564 23:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.564 23:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:32.564 "name": "raid_bdev1", 00:33:32.564 "uuid": "5a4481b6-aa51-496a-b0d8-ea8a5a29c358", 00:33:32.564 "strip_size_kb": 64, 00:33:32.564 "state": "online", 00:33:32.564 "raid_level": "raid0", 00:33:32.564 "superblock": true, 00:33:32.564 "num_base_bdevs": 3, 00:33:32.564 "num_base_bdevs_discovered": 3, 00:33:32.564 "num_base_bdevs_operational": 3, 00:33:32.564 "base_bdevs_list": [ 00:33:32.564 { 00:33:32.564 "name": "BaseBdev1", 00:33:32.564 "uuid": "34098be1-9176-54c8-89f0-9b7296de55a6", 00:33:32.564 "is_configured": true, 00:33:32.564 "data_offset": 2048, 00:33:32.564 "data_size": 63488 
00:33:32.564 }, 00:33:32.564 { 00:33:32.564 "name": "BaseBdev2", 00:33:32.564 "uuid": "5e3971cf-ad78-52ed-900e-14980ef2a292", 00:33:32.564 "is_configured": true, 00:33:32.564 "data_offset": 2048, 00:33:32.564 "data_size": 63488 00:33:32.564 }, 00:33:32.564 { 00:33:32.564 "name": "BaseBdev3", 00:33:32.564 "uuid": "ae4149d6-c920-5296-ae36-9f43d6130776", 00:33:32.564 "is_configured": true, 00:33:32.564 "data_offset": 2048, 00:33:32.564 "data_size": 63488 00:33:32.564 } 00:33:32.564 ] 00:33:32.564 }' 00:33:32.564 23:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:32.564 23:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:33.130 23:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:33.130 23:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.130 23:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:33.130 [2024-12-09 23:15:13.500426] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:33.130 [2024-12-09 23:15:13.500462] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:33.130 [2024-12-09 23:15:13.503217] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:33.130 [2024-12-09 23:15:13.503451] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:33.130 [2024-12-09 23:15:13.503515] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:33.130 [2024-12-09 23:15:13.503528] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:33:33.130 { 00:33:33.130 "results": [ 00:33:33.130 { 00:33:33.130 "job": "raid_bdev1", 00:33:33.130 "core_mask": "0x1", 00:33:33.130 "workload": "randrw", 00:33:33.130 "percentage": 50, 
00:33:33.130 "status": "finished", 00:33:33.130 "queue_depth": 1, 00:33:33.130 "io_size": 131072, 00:33:33.130 "runtime": 1.333565, 00:33:33.130 "iops": 13469.159733496304, 00:33:33.130 "mibps": 1683.644966687038, 00:33:33.130 "io_failed": 1, 00:33:33.130 "io_timeout": 0, 00:33:33.130 "avg_latency_us": 102.45971614566041, 00:33:33.130 "min_latency_us": 22.721285140562248, 00:33:33.130 "max_latency_us": 1776.578313253012 00:33:33.130 } 00:33:33.130 ], 00:33:33.130 "core_count": 1 00:33:33.130 } 00:33:33.130 23:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.130 23:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65200 00:33:33.130 23:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65200 ']' 00:33:33.130 23:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65200 00:33:33.130 23:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:33:33.130 23:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:33.130 23:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65200 00:33:33.130 killing process with pid 65200 00:33:33.130 23:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:33.130 23:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:33.130 23:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65200' 00:33:33.130 23:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65200 00:33:33.130 [2024-12-09 23:15:13.552633] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:33.130 23:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65200 00:33:33.389 [2024-12-09 
23:15:13.793269] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:34.763 23:15:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.QtyYJ5nKst 00:33:34.763 23:15:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:33:34.763 23:15:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:33:34.763 23:15:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:33:34.763 23:15:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:33:34.763 23:15:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:33:34.763 23:15:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:33:34.763 23:15:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:33:34.763 00:33:34.763 real 0m4.591s 00:33:34.763 user 0m5.394s 00:33:34.763 sys 0m0.618s 00:33:34.763 23:15:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:34.763 ************************************ 00:33:34.763 23:15:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:34.763 END TEST raid_read_error_test 00:33:34.763 ************************************ 00:33:34.763 23:15:15 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:33:34.763 23:15:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:33:34.763 23:15:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:34.763 23:15:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:34.763 ************************************ 00:33:34.763 START TEST raid_write_error_test 00:33:34.763 ************************************ 00:33:34.763 23:15:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:33:34.763 23:15:15 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:33:34.763 23:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:33:34.763 23:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:33:34.763 23:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:33:34.763 23:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:34.763 23:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:33:34.763 23:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:33:34.764 23:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:34.764 23:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:33:34.764 23:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:33:34.764 23:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:34.764 23:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:33:34.764 23:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:33:34.764 23:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:34.764 23:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:33:34.764 23:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:33:34.764 23:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:33:34.764 23:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:33:34.764 23:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:33:34.764 23:15:15 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:33:34.764 23:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:33:34.764 23:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:33:34.764 23:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:33:34.764 23:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:33:34.764 23:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:33:34.764 23:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.QwWAwTXRL5 00:33:34.764 23:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65346 00:33:34.764 23:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:33:34.764 23:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65346 00:33:34.764 23:15:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65346 ']' 00:33:34.764 23:15:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:34.764 23:15:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:34.764 23:15:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:34.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:34.764 23:15:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:34.764 23:15:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:34.764 [2024-12-09 23:15:15.221986] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:33:34.764 [2024-12-09 23:15:15.222305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65346 ] 00:33:35.022 [2024-12-09 23:15:15.402992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:35.022 [2024-12-09 23:15:15.524907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:35.279 [2024-12-09 23:15:15.745975] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:35.279 [2024-12-09 23:15:15.746044] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:35.538 23:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:35.538 23:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:33:35.538 23:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:33:35.538 23:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:33:35.538 23:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.538 23:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:35.538 BaseBdev1_malloc 00:33:35.538 23:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.538 23:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:33:35.538 23:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.538 23:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:35.538 true 00:33:35.538 23:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.538 23:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:33:35.538 23:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.538 23:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:35.538 [2024-12-09 23:15:16.135407] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:33:35.538 [2024-12-09 23:15:16.135464] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:35.538 [2024-12-09 23:15:16.135489] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:33:35.538 [2024-12-09 23:15:16.135504] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:35.538 [2024-12-09 23:15:16.137941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:35.538 [2024-12-09 23:15:16.137986] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:35.538 BaseBdev1 00:33:35.538 23:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.538 23:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:33:35.538 23:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:33:35.538 23:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.538 23:15:16 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:33:35.796 BaseBdev2_malloc 00:33:35.796 23:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.796 23:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:33:35.796 23:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.796 23:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:35.796 true 00:33:35.796 23:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.796 23:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:33:35.796 23:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.796 23:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:35.796 [2024-12-09 23:15:16.205615] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:33:35.796 [2024-12-09 23:15:16.205680] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:35.796 [2024-12-09 23:15:16.205701] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:33:35.796 [2024-12-09 23:15:16.205715] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:35.796 [2024-12-09 23:15:16.208162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:35.796 [2024-12-09 23:15:16.208208] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:33:35.796 BaseBdev2 00:33:35.796 23:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.796 23:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:33:35.796 23:15:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:33:35.796 23:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.796 23:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:35.796 BaseBdev3_malloc 00:33:35.796 23:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.796 23:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:33:35.796 23:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.796 23:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:35.796 true 00:33:35.796 23:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.796 23:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:33:35.796 23:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.796 23:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:35.796 [2024-12-09 23:15:16.284734] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:33:35.796 [2024-12-09 23:15:16.284926] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:35.796 [2024-12-09 23:15:16.284958] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:33:35.796 [2024-12-09 23:15:16.284974] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:35.796 [2024-12-09 23:15:16.287445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:35.796 [2024-12-09 23:15:16.287483] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:33:35.796 BaseBdev3 00:33:35.796 23:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.796 23:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:33:35.796 23:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.797 23:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:35.797 [2024-12-09 23:15:16.296801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:35.797 [2024-12-09 23:15:16.298939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:35.797 [2024-12-09 23:15:16.299146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:35.797 [2024-12-09 23:15:16.299368] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:33:35.797 [2024-12-09 23:15:16.299385] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:33:35.797 [2024-12-09 23:15:16.299672] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:33:35.797 [2024-12-09 23:15:16.299822] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:33:35.797 [2024-12-09 23:15:16.299839] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:33:35.797 [2024-12-09 23:15:16.299996] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:35.797 23:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.797 23:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:33:35.797 23:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:33:35.797 23:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:35.797 23:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:35.797 23:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:35.797 23:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:35.797 23:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:35.797 23:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:35.797 23:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:35.797 23:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:35.797 23:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:35.797 23:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:35.797 23:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.797 23:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:35.797 23:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.797 23:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:35.797 "name": "raid_bdev1", 00:33:35.797 "uuid": "51dada41-0ea4-4553-b741-ddcc40db5187", 00:33:35.797 "strip_size_kb": 64, 00:33:35.797 "state": "online", 00:33:35.797 "raid_level": "raid0", 00:33:35.797 "superblock": true, 00:33:35.797 "num_base_bdevs": 3, 00:33:35.797 "num_base_bdevs_discovered": 3, 00:33:35.797 "num_base_bdevs_operational": 3, 00:33:35.797 "base_bdevs_list": [ 00:33:35.797 { 00:33:35.797 "name": "BaseBdev1", 
00:33:35.797 "uuid": "7abf81e1-b28e-59dc-8ffe-8e76eb538c54", 00:33:35.797 "is_configured": true, 00:33:35.797 "data_offset": 2048, 00:33:35.797 "data_size": 63488 00:33:35.797 }, 00:33:35.797 { 00:33:35.797 "name": "BaseBdev2", 00:33:35.797 "uuid": "8ed20e08-ba9f-579c-8d66-362dda035718", 00:33:35.797 "is_configured": true, 00:33:35.797 "data_offset": 2048, 00:33:35.797 "data_size": 63488 00:33:35.797 }, 00:33:35.797 { 00:33:35.797 "name": "BaseBdev3", 00:33:35.797 "uuid": "a34089cd-fee9-5919-a516-b363240a2f02", 00:33:35.797 "is_configured": true, 00:33:35.797 "data_offset": 2048, 00:33:35.797 "data_size": 63488 00:33:35.797 } 00:33:35.797 ] 00:33:35.797 }' 00:33:35.797 23:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:35.797 23:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:36.364 23:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:33:36.364 23:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:33:36.364 [2024-12-09 23:15:16.825422] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:33:37.335 23:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:33:37.335 23:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.335 23:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:37.335 23:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.335 23:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:33:37.335 23:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:33:37.335 23:15:17 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:33:37.335 23:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:33:37.335 23:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:37.335 23:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:37.335 23:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:37.335 23:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:37.335 23:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:37.335 23:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:37.335 23:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:37.335 23:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:37.335 23:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:37.335 23:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:37.335 23:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:37.335 23:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.335 23:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:37.335 23:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.335 23:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:37.335 "name": "raid_bdev1", 00:33:37.335 "uuid": "51dada41-0ea4-4553-b741-ddcc40db5187", 00:33:37.335 "strip_size_kb": 64, 00:33:37.335 "state": "online", 00:33:37.335 
"raid_level": "raid0", 00:33:37.335 "superblock": true, 00:33:37.335 "num_base_bdevs": 3, 00:33:37.335 "num_base_bdevs_discovered": 3, 00:33:37.335 "num_base_bdevs_operational": 3, 00:33:37.335 "base_bdevs_list": [ 00:33:37.335 { 00:33:37.335 "name": "BaseBdev1", 00:33:37.335 "uuid": "7abf81e1-b28e-59dc-8ffe-8e76eb538c54", 00:33:37.335 "is_configured": true, 00:33:37.335 "data_offset": 2048, 00:33:37.335 "data_size": 63488 00:33:37.335 }, 00:33:37.335 { 00:33:37.335 "name": "BaseBdev2", 00:33:37.335 "uuid": "8ed20e08-ba9f-579c-8d66-362dda035718", 00:33:37.335 "is_configured": true, 00:33:37.335 "data_offset": 2048, 00:33:37.335 "data_size": 63488 00:33:37.335 }, 00:33:37.335 { 00:33:37.335 "name": "BaseBdev3", 00:33:37.335 "uuid": "a34089cd-fee9-5919-a516-b363240a2f02", 00:33:37.335 "is_configured": true, 00:33:37.335 "data_offset": 2048, 00:33:37.335 "data_size": 63488 00:33:37.335 } 00:33:37.335 ] 00:33:37.335 }' 00:33:37.335 23:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:37.335 23:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:37.593 23:15:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:37.593 23:15:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.593 23:15:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:37.593 [2024-12-09 23:15:18.188731] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:37.593 [2024-12-09 23:15:18.188919] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:37.593 [2024-12-09 23:15:18.192083] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:37.593 [2024-12-09 23:15:18.192270] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:37.593 [2024-12-09 23:15:18.192356] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:37.593 [2024-12-09 23:15:18.192478] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:33:37.593 { 00:33:37.593 "results": [ 00:33:37.593 { 00:33:37.593 "job": "raid_bdev1", 00:33:37.593 "core_mask": "0x1", 00:33:37.593 "workload": "randrw", 00:33:37.593 "percentage": 50, 00:33:37.593 "status": "finished", 00:33:37.593 "queue_depth": 1, 00:33:37.593 "io_size": 131072, 00:33:37.593 "runtime": 1.363586, 00:33:37.593 "iops": 15302.298498224534, 00:33:37.593 "mibps": 1912.7873122780668, 00:33:37.593 "io_failed": 1, 00:33:37.593 "io_timeout": 0, 00:33:37.593 "avg_latency_us": 90.22685199031618, 00:33:37.593 "min_latency_us": 27.553413654618474, 00:33:37.593 "max_latency_us": 1500.2216867469879 00:33:37.593 } 00:33:37.593 ], 00:33:37.593 "core_count": 1 00:33:37.593 } 00:33:37.593 23:15:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.593 23:15:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65346 00:33:37.593 23:15:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65346 ']' 00:33:37.593 23:15:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65346 00:33:37.593 23:15:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:33:37.593 23:15:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:37.593 23:15:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65346 00:33:37.851 23:15:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:37.851 killing process with pid 65346 00:33:37.851 23:15:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:37.851 
23:15:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65346' 00:33:37.851 23:15:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65346 00:33:37.851 23:15:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65346 00:33:37.851 [2024-12-09 23:15:18.243028] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:37.851 [2024-12-09 23:15:18.482263] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:39.225 23:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.QwWAwTXRL5 00:33:39.225 23:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:33:39.225 23:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:33:39.225 23:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:33:39.225 23:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:33:39.225 23:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:33:39.225 23:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:33:39.225 23:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:33:39.225 00:33:39.225 real 0m4.608s 00:33:39.225 user 0m5.396s 00:33:39.225 sys 0m0.653s 00:33:39.225 ************************************ 00:33:39.225 END TEST raid_write_error_test 00:33:39.225 ************************************ 00:33:39.225 23:15:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:39.225 23:15:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:39.225 23:15:19 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:33:39.225 23:15:19 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:33:39.225 23:15:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:33:39.225 23:15:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:39.225 23:15:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:39.225 ************************************ 00:33:39.225 START TEST raid_state_function_test 00:33:39.225 ************************************ 00:33:39.225 23:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:33:39.225 23:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:33:39.225 23:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:33:39.225 23:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:33:39.225 23:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:33:39.225 23:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:33:39.225 23:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:39.225 23:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:33:39.225 23:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:39.225 23:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:39.225 23:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:33:39.225 23:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:39.225 23:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:39.225 23:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:33:39.225 23:15:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:39.225 23:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:39.225 23:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:33:39.225 23:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:33:39.225 23:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:33:39.225 23:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:33:39.225 23:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:33:39.225 23:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:33:39.225 23:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:33:39.225 23:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:33:39.225 23:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:33:39.225 23:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:33:39.225 23:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:33:39.225 23:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65489 00:33:39.225 23:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:33:39.225 Process raid pid: 65489 00:33:39.225 23:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65489' 00:33:39.225 23:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65489 00:33:39.225 23:15:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65489 ']' 00:33:39.225 23:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:39.226 23:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:39.226 23:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:39.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:39.226 23:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:39.226 23:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:39.483 [2024-12-09 23:15:19.904342] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:33:39.483 [2024-12-09 23:15:19.904894] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:39.483 [2024-12-09 23:15:20.092197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:39.742 [2024-12-09 23:15:20.242963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:40.000 [2024-12-09 23:15:20.502427] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:40.000 [2024-12-09 23:15:20.502476] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:40.260 23:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:40.260 23:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:33:40.260 23:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:33:40.260 23:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.260 23:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:40.260 [2024-12-09 23:15:20.819398] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:40.260 [2024-12-09 23:15:20.819633] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:40.260 [2024-12-09 23:15:20.819659] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:40.260 [2024-12-09 23:15:20.819674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:40.260 [2024-12-09 23:15:20.819683] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:40.260 [2024-12-09 23:15:20.819696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:40.260 23:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.260 23:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:33:40.260 23:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:40.260 23:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:40.260 23:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:33:40.260 23:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:40.260 23:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:40.260 23:15:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:40.260 23:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:40.260 23:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:40.260 23:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:40.260 23:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:40.260 23:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.260 23:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:40.260 23:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:40.260 23:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.260 23:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:40.260 "name": "Existed_Raid", 00:33:40.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:40.260 "strip_size_kb": 64, 00:33:40.260 "state": "configuring", 00:33:40.260 "raid_level": "concat", 00:33:40.260 "superblock": false, 00:33:40.260 "num_base_bdevs": 3, 00:33:40.260 "num_base_bdevs_discovered": 0, 00:33:40.260 "num_base_bdevs_operational": 3, 00:33:40.260 "base_bdevs_list": [ 00:33:40.260 { 00:33:40.260 "name": "BaseBdev1", 00:33:40.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:40.260 "is_configured": false, 00:33:40.260 "data_offset": 0, 00:33:40.260 "data_size": 0 00:33:40.260 }, 00:33:40.260 { 00:33:40.260 "name": "BaseBdev2", 00:33:40.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:40.260 "is_configured": false, 00:33:40.260 "data_offset": 0, 00:33:40.260 "data_size": 0 00:33:40.260 }, 00:33:40.260 { 00:33:40.260 "name": "BaseBdev3", 00:33:40.260 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:33:40.260 "is_configured": false, 00:33:40.260 "data_offset": 0, 00:33:40.260 "data_size": 0 00:33:40.260 } 00:33:40.260 ] 00:33:40.260 }' 00:33:40.260 23:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:40.260 23:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:40.878 23:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:40.878 23:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.878 23:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:40.878 [2024-12-09 23:15:21.210795] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:40.878 [2024-12-09 23:15:21.210998] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:33:40.878 23:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.878 23:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:33:40.878 23:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.878 23:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:40.878 [2024-12-09 23:15:21.222784] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:40.878 [2024-12-09 23:15:21.222837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:40.878 [2024-12-09 23:15:21.222848] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:40.878 [2024-12-09 23:15:21.222861] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:33:40.879 [2024-12-09 23:15:21.222869] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:40.879 [2024-12-09 23:15:21.222881] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:40.879 23:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.879 23:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:33:40.879 23:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.879 23:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:40.879 [2024-12-09 23:15:21.268588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:40.879 BaseBdev1 00:33:40.879 23:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.879 23:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:33:40.879 23:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:33:40.879 23:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:40.879 23:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:33:40.879 23:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:40.879 23:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:40.879 23:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:40.879 23:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.879 23:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:33:40.879 23:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.879 23:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:40.879 23:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.879 23:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:40.879 [ 00:33:40.879 { 00:33:40.879 "name": "BaseBdev1", 00:33:40.879 "aliases": [ 00:33:40.879 "f1c83358-b04e-4d33-951c-3ca74b321177" 00:33:40.879 ], 00:33:40.879 "product_name": "Malloc disk", 00:33:40.879 "block_size": 512, 00:33:40.879 "num_blocks": 65536, 00:33:40.879 "uuid": "f1c83358-b04e-4d33-951c-3ca74b321177", 00:33:40.879 "assigned_rate_limits": { 00:33:40.879 "rw_ios_per_sec": 0, 00:33:40.879 "rw_mbytes_per_sec": 0, 00:33:40.879 "r_mbytes_per_sec": 0, 00:33:40.879 "w_mbytes_per_sec": 0 00:33:40.879 }, 00:33:40.879 "claimed": true, 00:33:40.879 "claim_type": "exclusive_write", 00:33:40.879 "zoned": false, 00:33:40.879 "supported_io_types": { 00:33:40.879 "read": true, 00:33:40.879 "write": true, 00:33:40.879 "unmap": true, 00:33:40.879 "flush": true, 00:33:40.879 "reset": true, 00:33:40.879 "nvme_admin": false, 00:33:40.879 "nvme_io": false, 00:33:40.879 "nvme_io_md": false, 00:33:40.879 "write_zeroes": true, 00:33:40.879 "zcopy": true, 00:33:40.879 "get_zone_info": false, 00:33:40.879 "zone_management": false, 00:33:40.879 "zone_append": false, 00:33:40.879 "compare": false, 00:33:40.879 "compare_and_write": false, 00:33:40.879 "abort": true, 00:33:40.879 "seek_hole": false, 00:33:40.879 "seek_data": false, 00:33:40.879 "copy": true, 00:33:40.879 "nvme_iov_md": false 00:33:40.879 }, 00:33:40.879 "memory_domains": [ 00:33:40.879 { 00:33:40.879 "dma_device_id": "system", 00:33:40.879 "dma_device_type": 1 00:33:40.879 }, 00:33:40.879 { 00:33:40.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:33:40.879 "dma_device_type": 2 00:33:40.879 } 00:33:40.879 ], 00:33:40.879 "driver_specific": {} 00:33:40.879 } 00:33:40.879 ] 00:33:40.879 23:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.879 23:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:33:40.879 23:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:33:40.879 23:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:40.879 23:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:40.879 23:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:33:40.879 23:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:40.879 23:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:40.879 23:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:40.879 23:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:40.879 23:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:40.879 23:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:40.879 23:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:40.879 23:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.879 23:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:40.879 23:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:40.879 23:15:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.879 23:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:40.879 "name": "Existed_Raid", 00:33:40.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:40.879 "strip_size_kb": 64, 00:33:40.879 "state": "configuring", 00:33:40.879 "raid_level": "concat", 00:33:40.879 "superblock": false, 00:33:40.879 "num_base_bdevs": 3, 00:33:40.879 "num_base_bdevs_discovered": 1, 00:33:40.879 "num_base_bdevs_operational": 3, 00:33:40.879 "base_bdevs_list": [ 00:33:40.879 { 00:33:40.879 "name": "BaseBdev1", 00:33:40.879 "uuid": "f1c83358-b04e-4d33-951c-3ca74b321177", 00:33:40.879 "is_configured": true, 00:33:40.879 "data_offset": 0, 00:33:40.879 "data_size": 65536 00:33:40.879 }, 00:33:40.879 { 00:33:40.879 "name": "BaseBdev2", 00:33:40.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:40.879 "is_configured": false, 00:33:40.879 "data_offset": 0, 00:33:40.879 "data_size": 0 00:33:40.879 }, 00:33:40.879 { 00:33:40.879 "name": "BaseBdev3", 00:33:40.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:40.879 "is_configured": false, 00:33:40.879 "data_offset": 0, 00:33:40.879 "data_size": 0 00:33:40.879 } 00:33:40.879 ] 00:33:40.879 }' 00:33:40.879 23:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:40.879 23:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:41.150 23:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:41.150 23:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.150 23:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:41.150 [2024-12-09 23:15:21.712071] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:41.150 [2024-12-09 23:15:21.712267] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:33:41.150 23:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.150 23:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:33:41.150 23:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.150 23:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:41.150 [2024-12-09 23:15:21.724129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:41.150 [2024-12-09 23:15:21.726412] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:41.150 [2024-12-09 23:15:21.726590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:41.150 [2024-12-09 23:15:21.726614] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:41.150 [2024-12-09 23:15:21.726639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:41.150 23:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.150 23:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:33:41.150 23:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:41.150 23:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:33:41.150 23:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:41.150 23:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:41.150 23:15:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:33:41.150 23:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:41.150 23:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:41.150 23:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:41.150 23:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:41.150 23:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:41.150 23:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:41.150 23:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:41.150 23:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:41.150 23:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.150 23:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:41.150 23:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.150 23:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:41.150 "name": "Existed_Raid", 00:33:41.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:41.150 "strip_size_kb": 64, 00:33:41.150 "state": "configuring", 00:33:41.150 "raid_level": "concat", 00:33:41.150 "superblock": false, 00:33:41.150 "num_base_bdevs": 3, 00:33:41.150 "num_base_bdevs_discovered": 1, 00:33:41.150 "num_base_bdevs_operational": 3, 00:33:41.150 "base_bdevs_list": [ 00:33:41.150 { 00:33:41.150 "name": "BaseBdev1", 00:33:41.150 "uuid": "f1c83358-b04e-4d33-951c-3ca74b321177", 00:33:41.150 "is_configured": true, 00:33:41.150 "data_offset": 
0, 00:33:41.150 "data_size": 65536 00:33:41.150 }, 00:33:41.150 { 00:33:41.150 "name": "BaseBdev2", 00:33:41.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:41.150 "is_configured": false, 00:33:41.150 "data_offset": 0, 00:33:41.150 "data_size": 0 00:33:41.150 }, 00:33:41.150 { 00:33:41.150 "name": "BaseBdev3", 00:33:41.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:41.150 "is_configured": false, 00:33:41.150 "data_offset": 0, 00:33:41.150 "data_size": 0 00:33:41.150 } 00:33:41.150 ] 00:33:41.150 }' 00:33:41.150 23:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:41.150 23:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:41.716 23:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:33:41.716 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.716 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:41.716 [2024-12-09 23:15:22.225082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:41.716 BaseBdev2 00:33:41.716 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.716 23:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:33:41.716 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:33:41.716 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:41.716 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:33:41.716 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:41.716 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:33:41.716 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:41.716 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.716 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:41.716 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.716 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:41.716 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.716 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:41.716 [ 00:33:41.716 { 00:33:41.716 "name": "BaseBdev2", 00:33:41.716 "aliases": [ 00:33:41.716 "8da3f630-d54b-47d9-9b09-51eea882bd2d" 00:33:41.716 ], 00:33:41.716 "product_name": "Malloc disk", 00:33:41.716 "block_size": 512, 00:33:41.716 "num_blocks": 65536, 00:33:41.716 "uuid": "8da3f630-d54b-47d9-9b09-51eea882bd2d", 00:33:41.716 "assigned_rate_limits": { 00:33:41.716 "rw_ios_per_sec": 0, 00:33:41.716 "rw_mbytes_per_sec": 0, 00:33:41.716 "r_mbytes_per_sec": 0, 00:33:41.716 "w_mbytes_per_sec": 0 00:33:41.716 }, 00:33:41.716 "claimed": true, 00:33:41.716 "claim_type": "exclusive_write", 00:33:41.716 "zoned": false, 00:33:41.716 "supported_io_types": { 00:33:41.716 "read": true, 00:33:41.716 "write": true, 00:33:41.716 "unmap": true, 00:33:41.716 "flush": true, 00:33:41.716 "reset": true, 00:33:41.716 "nvme_admin": false, 00:33:41.716 "nvme_io": false, 00:33:41.716 "nvme_io_md": false, 00:33:41.716 "write_zeroes": true, 00:33:41.716 "zcopy": true, 00:33:41.716 "get_zone_info": false, 00:33:41.716 "zone_management": false, 00:33:41.716 "zone_append": false, 00:33:41.716 "compare": false, 00:33:41.716 "compare_and_write": false, 00:33:41.716 "abort": true, 00:33:41.716 "seek_hole": 
false, 00:33:41.716 "seek_data": false, 00:33:41.716 "copy": true, 00:33:41.716 "nvme_iov_md": false 00:33:41.716 }, 00:33:41.716 "memory_domains": [ 00:33:41.716 { 00:33:41.716 "dma_device_id": "system", 00:33:41.716 "dma_device_type": 1 00:33:41.716 }, 00:33:41.716 { 00:33:41.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:41.716 "dma_device_type": 2 00:33:41.716 } 00:33:41.716 ], 00:33:41.716 "driver_specific": {} 00:33:41.716 } 00:33:41.716 ] 00:33:41.716 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.716 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:33:41.716 23:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:33:41.716 23:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:41.716 23:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:33:41.716 23:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:41.716 23:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:41.716 23:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:33:41.716 23:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:41.716 23:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:41.716 23:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:41.716 23:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:41.716 23:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:41.716 23:15:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:33:41.716 23:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:41.716 23:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:41.716 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.716 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:41.716 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.716 23:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:41.716 "name": "Existed_Raid", 00:33:41.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:41.716 "strip_size_kb": 64, 00:33:41.716 "state": "configuring", 00:33:41.716 "raid_level": "concat", 00:33:41.716 "superblock": false, 00:33:41.716 "num_base_bdevs": 3, 00:33:41.716 "num_base_bdevs_discovered": 2, 00:33:41.716 "num_base_bdevs_operational": 3, 00:33:41.716 "base_bdevs_list": [ 00:33:41.716 { 00:33:41.716 "name": "BaseBdev1", 00:33:41.716 "uuid": "f1c83358-b04e-4d33-951c-3ca74b321177", 00:33:41.716 "is_configured": true, 00:33:41.716 "data_offset": 0, 00:33:41.716 "data_size": 65536 00:33:41.716 }, 00:33:41.716 { 00:33:41.716 "name": "BaseBdev2", 00:33:41.716 "uuid": "8da3f630-d54b-47d9-9b09-51eea882bd2d", 00:33:41.716 "is_configured": true, 00:33:41.716 "data_offset": 0, 00:33:41.716 "data_size": 65536 00:33:41.716 }, 00:33:41.716 { 00:33:41.716 "name": "BaseBdev3", 00:33:41.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:41.716 "is_configured": false, 00:33:41.716 "data_offset": 0, 00:33:41.716 "data_size": 0 00:33:41.716 } 00:33:41.716 ] 00:33:41.716 }' 00:33:41.716 23:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:41.716 23:15:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:33:42.283 23:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:33:42.283 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.283 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:42.283 [2024-12-09 23:15:22.750377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:42.283 [2024-12-09 23:15:22.750459] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:33:42.283 [2024-12-09 23:15:22.750477] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:33:42.283 [2024-12-09 23:15:22.750793] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:33:42.283 [2024-12-09 23:15:22.750978] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:33:42.283 [2024-12-09 23:15:22.750999] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:33:42.283 [2024-12-09 23:15:22.751285] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:42.283 BaseBdev3 00:33:42.283 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.283 23:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:33:42.283 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:33:42.283 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:42.283 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:33:42.283 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:42.283 23:15:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:42.283 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:42.283 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.283 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:42.283 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.283 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:33:42.283 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.283 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:42.284 [ 00:33:42.284 { 00:33:42.284 "name": "BaseBdev3", 00:33:42.284 "aliases": [ 00:33:42.284 "b4a45451-282b-4a62-8302-9cadd589547c" 00:33:42.284 ], 00:33:42.284 "product_name": "Malloc disk", 00:33:42.284 "block_size": 512, 00:33:42.284 "num_blocks": 65536, 00:33:42.284 "uuid": "b4a45451-282b-4a62-8302-9cadd589547c", 00:33:42.284 "assigned_rate_limits": { 00:33:42.284 "rw_ios_per_sec": 0, 00:33:42.284 "rw_mbytes_per_sec": 0, 00:33:42.284 "r_mbytes_per_sec": 0, 00:33:42.284 "w_mbytes_per_sec": 0 00:33:42.284 }, 00:33:42.284 "claimed": true, 00:33:42.284 "claim_type": "exclusive_write", 00:33:42.284 "zoned": false, 00:33:42.284 "supported_io_types": { 00:33:42.284 "read": true, 00:33:42.284 "write": true, 00:33:42.284 "unmap": true, 00:33:42.284 "flush": true, 00:33:42.284 "reset": true, 00:33:42.284 "nvme_admin": false, 00:33:42.284 "nvme_io": false, 00:33:42.284 "nvme_io_md": false, 00:33:42.284 "write_zeroes": true, 00:33:42.284 "zcopy": true, 00:33:42.284 "get_zone_info": false, 00:33:42.284 "zone_management": false, 00:33:42.284 "zone_append": false, 00:33:42.284 "compare": false, 
00:33:42.284 "compare_and_write": false, 00:33:42.284 "abort": true, 00:33:42.284 "seek_hole": false, 00:33:42.284 "seek_data": false, 00:33:42.284 "copy": true, 00:33:42.284 "nvme_iov_md": false 00:33:42.284 }, 00:33:42.284 "memory_domains": [ 00:33:42.284 { 00:33:42.284 "dma_device_id": "system", 00:33:42.284 "dma_device_type": 1 00:33:42.284 }, 00:33:42.284 { 00:33:42.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:42.284 "dma_device_type": 2 00:33:42.284 } 00:33:42.284 ], 00:33:42.284 "driver_specific": {} 00:33:42.284 } 00:33:42.284 ] 00:33:42.284 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.284 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:33:42.284 23:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:33:42.284 23:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:42.284 23:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:33:42.284 23:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:42.284 23:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:42.284 23:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:33:42.284 23:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:42.284 23:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:42.284 23:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:42.284 23:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:42.284 23:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:33:42.284 23:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:42.284 23:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:42.284 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.284 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:42.284 23:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:42.284 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.284 23:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:42.284 "name": "Existed_Raid", 00:33:42.284 "uuid": "e9a76c1f-f692-4ab2-a05f-f9252d9f212f", 00:33:42.284 "strip_size_kb": 64, 00:33:42.284 "state": "online", 00:33:42.284 "raid_level": "concat", 00:33:42.284 "superblock": false, 00:33:42.284 "num_base_bdevs": 3, 00:33:42.284 "num_base_bdevs_discovered": 3, 00:33:42.284 "num_base_bdevs_operational": 3, 00:33:42.284 "base_bdevs_list": [ 00:33:42.284 { 00:33:42.284 "name": "BaseBdev1", 00:33:42.284 "uuid": "f1c83358-b04e-4d33-951c-3ca74b321177", 00:33:42.284 "is_configured": true, 00:33:42.284 "data_offset": 0, 00:33:42.284 "data_size": 65536 00:33:42.284 }, 00:33:42.284 { 00:33:42.284 "name": "BaseBdev2", 00:33:42.284 "uuid": "8da3f630-d54b-47d9-9b09-51eea882bd2d", 00:33:42.284 "is_configured": true, 00:33:42.284 "data_offset": 0, 00:33:42.284 "data_size": 65536 00:33:42.284 }, 00:33:42.284 { 00:33:42.284 "name": "BaseBdev3", 00:33:42.284 "uuid": "b4a45451-282b-4a62-8302-9cadd589547c", 00:33:42.284 "is_configured": true, 00:33:42.284 "data_offset": 0, 00:33:42.284 "data_size": 65536 00:33:42.284 } 00:33:42.284 ] 00:33:42.284 }' 00:33:42.284 23:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:33:42.284 23:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:42.853 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:33:42.853 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:33:42.853 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:42.853 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:42.853 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:33:42.853 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:42.853 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:42.853 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:33:42.853 23:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.853 23:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:42.853 [2024-12-09 23:15:23.278636] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:42.853 23:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.853 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:42.853 "name": "Existed_Raid", 00:33:42.853 "aliases": [ 00:33:42.853 "e9a76c1f-f692-4ab2-a05f-f9252d9f212f" 00:33:42.853 ], 00:33:42.853 "product_name": "Raid Volume", 00:33:42.853 "block_size": 512, 00:33:42.853 "num_blocks": 196608, 00:33:42.853 "uuid": "e9a76c1f-f692-4ab2-a05f-f9252d9f212f", 00:33:42.853 "assigned_rate_limits": { 00:33:42.853 "rw_ios_per_sec": 0, 00:33:42.853 "rw_mbytes_per_sec": 0, 00:33:42.853 "r_mbytes_per_sec": 
0, 00:33:42.853 "w_mbytes_per_sec": 0 00:33:42.853 }, 00:33:42.853 "claimed": false, 00:33:42.853 "zoned": false, 00:33:42.853 "supported_io_types": { 00:33:42.853 "read": true, 00:33:42.853 "write": true, 00:33:42.853 "unmap": true, 00:33:42.853 "flush": true, 00:33:42.853 "reset": true, 00:33:42.853 "nvme_admin": false, 00:33:42.853 "nvme_io": false, 00:33:42.853 "nvme_io_md": false, 00:33:42.853 "write_zeroes": true, 00:33:42.853 "zcopy": false, 00:33:42.853 "get_zone_info": false, 00:33:42.853 "zone_management": false, 00:33:42.853 "zone_append": false, 00:33:42.853 "compare": false, 00:33:42.853 "compare_and_write": false, 00:33:42.853 "abort": false, 00:33:42.853 "seek_hole": false, 00:33:42.853 "seek_data": false, 00:33:42.853 "copy": false, 00:33:42.853 "nvme_iov_md": false 00:33:42.853 }, 00:33:42.853 "memory_domains": [ 00:33:42.853 { 00:33:42.853 "dma_device_id": "system", 00:33:42.853 "dma_device_type": 1 00:33:42.853 }, 00:33:42.853 { 00:33:42.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:42.853 "dma_device_type": 2 00:33:42.853 }, 00:33:42.853 { 00:33:42.853 "dma_device_id": "system", 00:33:42.853 "dma_device_type": 1 00:33:42.853 }, 00:33:42.853 { 00:33:42.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:42.853 "dma_device_type": 2 00:33:42.853 }, 00:33:42.853 { 00:33:42.853 "dma_device_id": "system", 00:33:42.853 "dma_device_type": 1 00:33:42.853 }, 00:33:42.853 { 00:33:42.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:42.853 "dma_device_type": 2 00:33:42.853 } 00:33:42.853 ], 00:33:42.853 "driver_specific": { 00:33:42.853 "raid": { 00:33:42.853 "uuid": "e9a76c1f-f692-4ab2-a05f-f9252d9f212f", 00:33:42.853 "strip_size_kb": 64, 00:33:42.853 "state": "online", 00:33:42.853 "raid_level": "concat", 00:33:42.853 "superblock": false, 00:33:42.853 "num_base_bdevs": 3, 00:33:42.853 "num_base_bdevs_discovered": 3, 00:33:42.853 "num_base_bdevs_operational": 3, 00:33:42.853 "base_bdevs_list": [ 00:33:42.853 { 00:33:42.853 "name": "BaseBdev1", 
00:33:42.853 "uuid": "f1c83358-b04e-4d33-951c-3ca74b321177", 00:33:42.853 "is_configured": true, 00:33:42.853 "data_offset": 0, 00:33:42.853 "data_size": 65536 00:33:42.853 }, 00:33:42.853 { 00:33:42.853 "name": "BaseBdev2", 00:33:42.853 "uuid": "8da3f630-d54b-47d9-9b09-51eea882bd2d", 00:33:42.853 "is_configured": true, 00:33:42.853 "data_offset": 0, 00:33:42.853 "data_size": 65536 00:33:42.853 }, 00:33:42.853 { 00:33:42.853 "name": "BaseBdev3", 00:33:42.853 "uuid": "b4a45451-282b-4a62-8302-9cadd589547c", 00:33:42.853 "is_configured": true, 00:33:42.853 "data_offset": 0, 00:33:42.853 "data_size": 65536 00:33:42.853 } 00:33:42.853 ] 00:33:42.853 } 00:33:42.853 } 00:33:42.853 }' 00:33:42.853 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:42.853 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:33:42.853 BaseBdev2 00:33:42.853 BaseBdev3' 00:33:42.853 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:42.853 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:33:42.853 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:42.853 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:33:42.853 23:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.853 23:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:42.853 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:42.853 23:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:33:42.853 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:42.853 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:42.853 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:42.853 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:42.853 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:33:42.853 23:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.853 23:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:43.117 23:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.117 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:43.117 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:43.117 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:43.117 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:33:43.117 23:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.117 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:43.117 23:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:43.117 23:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.117 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:33:43.117 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:43.118 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:33:43.118 23:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.118 23:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:43.118 [2024-12-09 23:15:23.554364] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:43.118 [2024-12-09 23:15:23.554414] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:43.118 [2024-12-09 23:15:23.554478] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:43.118 23:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.118 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:33:43.118 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:33:43.118 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:33:43.118 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:33:43.118 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:33:43.118 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:33:43.118 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:43.118 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:33:43.118 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:33:43.118 23:15:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:43.118 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:43.118 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:43.118 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:43.118 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:43.118 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:43.118 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:43.118 23:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.118 23:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:43.118 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:43.118 23:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.118 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:43.118 "name": "Existed_Raid", 00:33:43.118 "uuid": "e9a76c1f-f692-4ab2-a05f-f9252d9f212f", 00:33:43.118 "strip_size_kb": 64, 00:33:43.118 "state": "offline", 00:33:43.118 "raid_level": "concat", 00:33:43.118 "superblock": false, 00:33:43.118 "num_base_bdevs": 3, 00:33:43.118 "num_base_bdevs_discovered": 2, 00:33:43.118 "num_base_bdevs_operational": 2, 00:33:43.118 "base_bdevs_list": [ 00:33:43.118 { 00:33:43.118 "name": null, 00:33:43.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:43.118 "is_configured": false, 00:33:43.118 "data_offset": 0, 00:33:43.118 "data_size": 65536 00:33:43.118 }, 00:33:43.118 { 00:33:43.118 "name": "BaseBdev2", 00:33:43.118 "uuid": 
"8da3f630-d54b-47d9-9b09-51eea882bd2d", 00:33:43.118 "is_configured": true, 00:33:43.118 "data_offset": 0, 00:33:43.118 "data_size": 65536 00:33:43.118 }, 00:33:43.118 { 00:33:43.118 "name": "BaseBdev3", 00:33:43.118 "uuid": "b4a45451-282b-4a62-8302-9cadd589547c", 00:33:43.118 "is_configured": true, 00:33:43.118 "data_offset": 0, 00:33:43.118 "data_size": 65536 00:33:43.118 } 00:33:43.118 ] 00:33:43.118 }' 00:33:43.118 23:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:43.118 23:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:43.689 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:33:43.689 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:43.689 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:43.689 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:33:43.689 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.689 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:43.689 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.689 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:33:43.689 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:43.689 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:33:43.689 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.689 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:43.689 [2024-12-09 23:15:24.213302] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:43.689 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.689 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:33:43.689 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:43.689 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:33:43.689 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:43.689 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.689 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:43.948 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.948 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:33:43.948 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:43.948 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:33:43.948 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.948 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:43.948 [2024-12-09 23:15:24.356361] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:33:43.948 [2024-12-09 23:15:24.356438] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:33:43.948 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.948 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:33:43.948 23:15:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:43.948 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:33:43.948 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:43.948 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.948 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:43.948 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.949 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:33:43.949 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:33:43.949 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:33:43.949 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:33:43.949 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:43.949 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:33:43.949 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.949 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:43.949 BaseBdev2 00:33:43.949 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.949 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:33:43.949 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:33:43.949 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:43.949 
23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:33:43.949 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:43.949 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:43.949 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:43.949 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.949 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:43.949 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.949 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:43.949 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.949 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:43.949 [ 00:33:43.949 { 00:33:43.949 "name": "BaseBdev2", 00:33:43.949 "aliases": [ 00:33:43.949 "9d3f5f1a-52f3-4884-a381-817775c84ac8" 00:33:43.949 ], 00:33:43.949 "product_name": "Malloc disk", 00:33:43.949 "block_size": 512, 00:33:43.949 "num_blocks": 65536, 00:33:43.949 "uuid": "9d3f5f1a-52f3-4884-a381-817775c84ac8", 00:33:43.949 "assigned_rate_limits": { 00:33:43.949 "rw_ios_per_sec": 0, 00:33:43.949 "rw_mbytes_per_sec": 0, 00:33:43.949 "r_mbytes_per_sec": 0, 00:33:43.949 "w_mbytes_per_sec": 0 00:33:43.949 }, 00:33:43.949 "claimed": false, 00:33:43.949 "zoned": false, 00:33:43.949 "supported_io_types": { 00:33:43.949 "read": true, 00:33:43.949 "write": true, 00:33:43.949 "unmap": true, 00:33:43.949 "flush": true, 00:33:43.949 "reset": true, 00:33:43.949 "nvme_admin": false, 00:33:43.949 "nvme_io": false, 00:33:43.949 "nvme_io_md": false, 00:33:43.949 "write_zeroes": true, 
00:33:43.949 "zcopy": true, 00:33:43.949 "get_zone_info": false, 00:33:43.949 "zone_management": false, 00:33:43.949 "zone_append": false, 00:33:43.949 "compare": false, 00:33:44.209 "compare_and_write": false, 00:33:44.209 "abort": true, 00:33:44.209 "seek_hole": false, 00:33:44.209 "seek_data": false, 00:33:44.209 "copy": true, 00:33:44.209 "nvme_iov_md": false 00:33:44.209 }, 00:33:44.209 "memory_domains": [ 00:33:44.209 { 00:33:44.209 "dma_device_id": "system", 00:33:44.209 "dma_device_type": 1 00:33:44.209 }, 00:33:44.209 { 00:33:44.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:44.209 "dma_device_type": 2 00:33:44.209 } 00:33:44.209 ], 00:33:44.209 "driver_specific": {} 00:33:44.209 } 00:33:44.209 ] 00:33:44.209 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.209 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:33:44.209 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:33:44.209 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:44.209 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:33:44.209 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.209 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:44.209 BaseBdev3 00:33:44.209 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.209 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:33:44.209 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:33:44.209 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:44.209 23:15:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:33:44.209 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:44.209 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:44.209 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:44.209 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.209 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:44.209 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.209 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:33:44.209 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.209 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:44.209 [ 00:33:44.209 { 00:33:44.209 "name": "BaseBdev3", 00:33:44.209 "aliases": [ 00:33:44.209 "2df89a9d-ce41-4897-8b4c-cc7dbd6f06a9" 00:33:44.209 ], 00:33:44.209 "product_name": "Malloc disk", 00:33:44.209 "block_size": 512, 00:33:44.209 "num_blocks": 65536, 00:33:44.209 "uuid": "2df89a9d-ce41-4897-8b4c-cc7dbd6f06a9", 00:33:44.209 "assigned_rate_limits": { 00:33:44.209 "rw_ios_per_sec": 0, 00:33:44.209 "rw_mbytes_per_sec": 0, 00:33:44.209 "r_mbytes_per_sec": 0, 00:33:44.209 "w_mbytes_per_sec": 0 00:33:44.209 }, 00:33:44.209 "claimed": false, 00:33:44.209 "zoned": false, 00:33:44.209 "supported_io_types": { 00:33:44.209 "read": true, 00:33:44.209 "write": true, 00:33:44.209 "unmap": true, 00:33:44.209 "flush": true, 00:33:44.209 "reset": true, 00:33:44.209 "nvme_admin": false, 00:33:44.209 "nvme_io": false, 00:33:44.209 "nvme_io_md": false, 00:33:44.209 "write_zeroes": true, 
00:33:44.209 "zcopy": true, 00:33:44.209 "get_zone_info": false, 00:33:44.209 "zone_management": false, 00:33:44.209 "zone_append": false, 00:33:44.209 "compare": false, 00:33:44.209 "compare_and_write": false, 00:33:44.209 "abort": true, 00:33:44.209 "seek_hole": false, 00:33:44.209 "seek_data": false, 00:33:44.209 "copy": true, 00:33:44.209 "nvme_iov_md": false 00:33:44.209 }, 00:33:44.209 "memory_domains": [ 00:33:44.209 { 00:33:44.209 "dma_device_id": "system", 00:33:44.209 "dma_device_type": 1 00:33:44.209 }, 00:33:44.209 { 00:33:44.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:44.209 "dma_device_type": 2 00:33:44.209 } 00:33:44.209 ], 00:33:44.209 "driver_specific": {} 00:33:44.209 } 00:33:44.209 ] 00:33:44.209 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.209 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:33:44.209 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:33:44.209 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:44.209 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:33:44.209 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.209 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:44.209 [2024-12-09 23:15:24.688374] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:44.209 [2024-12-09 23:15:24.688561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:44.209 [2024-12-09 23:15:24.688666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:44.209 [2024-12-09 23:15:24.690958] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:44.209 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.209 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:33:44.210 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:44.210 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:44.210 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:33:44.210 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:44.210 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:44.210 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:44.210 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:44.210 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:44.210 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:44.210 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:44.210 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.210 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:44.210 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:44.210 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.210 23:15:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:44.210 "name": "Existed_Raid", 00:33:44.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:44.210 "strip_size_kb": 64, 00:33:44.210 "state": "configuring", 00:33:44.210 "raid_level": "concat", 00:33:44.210 "superblock": false, 00:33:44.210 "num_base_bdevs": 3, 00:33:44.210 "num_base_bdevs_discovered": 2, 00:33:44.210 "num_base_bdevs_operational": 3, 00:33:44.210 "base_bdevs_list": [ 00:33:44.210 { 00:33:44.210 "name": "BaseBdev1", 00:33:44.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:44.210 "is_configured": false, 00:33:44.210 "data_offset": 0, 00:33:44.210 "data_size": 0 00:33:44.210 }, 00:33:44.210 { 00:33:44.210 "name": "BaseBdev2", 00:33:44.210 "uuid": "9d3f5f1a-52f3-4884-a381-817775c84ac8", 00:33:44.210 "is_configured": true, 00:33:44.210 "data_offset": 0, 00:33:44.210 "data_size": 65536 00:33:44.210 }, 00:33:44.210 { 00:33:44.210 "name": "BaseBdev3", 00:33:44.210 "uuid": "2df89a9d-ce41-4897-8b4c-cc7dbd6f06a9", 00:33:44.210 "is_configured": true, 00:33:44.210 "data_offset": 0, 00:33:44.210 "data_size": 65536 00:33:44.210 } 00:33:44.210 ] 00:33:44.210 }' 00:33:44.210 23:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:44.210 23:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:44.779 23:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:33:44.779 23:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.779 23:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:44.779 [2024-12-09 23:15:25.119799] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:44.779 23:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.779 23:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:33:44.779 23:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:44.779 23:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:44.779 23:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:33:44.779 23:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:44.779 23:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:44.779 23:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:44.779 23:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:44.779 23:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:44.779 23:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:44.779 23:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:44.779 23:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:44.779 23:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.779 23:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:44.779 23:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.779 23:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:44.779 "name": "Existed_Raid", 00:33:44.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:44.779 "strip_size_kb": 64, 00:33:44.779 "state": "configuring", 00:33:44.779 "raid_level": "concat", 00:33:44.779 "superblock": false, 
00:33:44.779 "num_base_bdevs": 3, 00:33:44.779 "num_base_bdevs_discovered": 1, 00:33:44.780 "num_base_bdevs_operational": 3, 00:33:44.780 "base_bdevs_list": [ 00:33:44.780 { 00:33:44.780 "name": "BaseBdev1", 00:33:44.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:44.780 "is_configured": false, 00:33:44.780 "data_offset": 0, 00:33:44.780 "data_size": 0 00:33:44.780 }, 00:33:44.780 { 00:33:44.780 "name": null, 00:33:44.780 "uuid": "9d3f5f1a-52f3-4884-a381-817775c84ac8", 00:33:44.780 "is_configured": false, 00:33:44.780 "data_offset": 0, 00:33:44.780 "data_size": 65536 00:33:44.780 }, 00:33:44.780 { 00:33:44.780 "name": "BaseBdev3", 00:33:44.780 "uuid": "2df89a9d-ce41-4897-8b4c-cc7dbd6f06a9", 00:33:44.780 "is_configured": true, 00:33:44.780 "data_offset": 0, 00:33:44.780 "data_size": 65536 00:33:44.780 } 00:33:44.780 ] 00:33:44.780 }' 00:33:44.780 23:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:44.780 23:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:45.039 23:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:45.039 23:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.039 23:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:45.039 23:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:33:45.039 23:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.039 23:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:33:45.039 23:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:33:45.039 23:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.039 
23:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:45.039 [2024-12-09 23:15:25.658550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:45.039 BaseBdev1 00:33:45.039 23:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.039 23:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:33:45.039 23:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:33:45.039 23:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:45.039 23:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:33:45.039 23:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:45.039 23:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:45.039 23:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:45.039 23:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.039 23:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:45.039 23:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.039 23:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:45.039 23:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.039 23:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:45.298 [ 00:33:45.298 { 00:33:45.298 "name": "BaseBdev1", 00:33:45.298 "aliases": [ 00:33:45.298 "7cdb9427-f9b3-4c5f-bbdf-24bedd6aaa9c" 00:33:45.298 ], 00:33:45.298 "product_name": 
"Malloc disk", 00:33:45.298 "block_size": 512, 00:33:45.298 "num_blocks": 65536, 00:33:45.298 "uuid": "7cdb9427-f9b3-4c5f-bbdf-24bedd6aaa9c", 00:33:45.298 "assigned_rate_limits": { 00:33:45.298 "rw_ios_per_sec": 0, 00:33:45.298 "rw_mbytes_per_sec": 0, 00:33:45.298 "r_mbytes_per_sec": 0, 00:33:45.298 "w_mbytes_per_sec": 0 00:33:45.298 }, 00:33:45.298 "claimed": true, 00:33:45.298 "claim_type": "exclusive_write", 00:33:45.298 "zoned": false, 00:33:45.298 "supported_io_types": { 00:33:45.298 "read": true, 00:33:45.298 "write": true, 00:33:45.298 "unmap": true, 00:33:45.298 "flush": true, 00:33:45.298 "reset": true, 00:33:45.298 "nvme_admin": false, 00:33:45.298 "nvme_io": false, 00:33:45.298 "nvme_io_md": false, 00:33:45.298 "write_zeroes": true, 00:33:45.298 "zcopy": true, 00:33:45.298 "get_zone_info": false, 00:33:45.298 "zone_management": false, 00:33:45.298 "zone_append": false, 00:33:45.298 "compare": false, 00:33:45.298 "compare_and_write": false, 00:33:45.298 "abort": true, 00:33:45.298 "seek_hole": false, 00:33:45.298 "seek_data": false, 00:33:45.298 "copy": true, 00:33:45.298 "nvme_iov_md": false 00:33:45.298 }, 00:33:45.298 "memory_domains": [ 00:33:45.298 { 00:33:45.298 "dma_device_id": "system", 00:33:45.298 "dma_device_type": 1 00:33:45.298 }, 00:33:45.298 { 00:33:45.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:45.299 "dma_device_type": 2 00:33:45.299 } 00:33:45.299 ], 00:33:45.299 "driver_specific": {} 00:33:45.299 } 00:33:45.299 ] 00:33:45.299 23:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.299 23:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:33:45.299 23:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:33:45.299 23:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:45.299 23:15:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:45.299 23:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:33:45.299 23:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:45.299 23:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:45.299 23:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:45.299 23:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:45.299 23:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:45.299 23:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:45.299 23:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:45.299 23:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.299 23:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:45.299 23:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:45.299 23:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.299 23:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:45.299 "name": "Existed_Raid", 00:33:45.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:45.299 "strip_size_kb": 64, 00:33:45.299 "state": "configuring", 00:33:45.299 "raid_level": "concat", 00:33:45.299 "superblock": false, 00:33:45.299 "num_base_bdevs": 3, 00:33:45.299 "num_base_bdevs_discovered": 2, 00:33:45.299 "num_base_bdevs_operational": 3, 00:33:45.299 "base_bdevs_list": [ 00:33:45.299 { 00:33:45.299 "name": "BaseBdev1", 
00:33:45.299 "uuid": "7cdb9427-f9b3-4c5f-bbdf-24bedd6aaa9c", 00:33:45.299 "is_configured": true, 00:33:45.299 "data_offset": 0, 00:33:45.299 "data_size": 65536 00:33:45.299 }, 00:33:45.299 { 00:33:45.299 "name": null, 00:33:45.299 "uuid": "9d3f5f1a-52f3-4884-a381-817775c84ac8", 00:33:45.299 "is_configured": false, 00:33:45.299 "data_offset": 0, 00:33:45.299 "data_size": 65536 00:33:45.299 }, 00:33:45.299 { 00:33:45.299 "name": "BaseBdev3", 00:33:45.299 "uuid": "2df89a9d-ce41-4897-8b4c-cc7dbd6f06a9", 00:33:45.299 "is_configured": true, 00:33:45.299 "data_offset": 0, 00:33:45.299 "data_size": 65536 00:33:45.299 } 00:33:45.299 ] 00:33:45.299 }' 00:33:45.299 23:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:45.299 23:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:45.558 23:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:45.558 23:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:33:45.558 23:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.558 23:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:45.558 23:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.817 23:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:33:45.817 23:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:33:45.817 23:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.817 23:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:45.817 [2024-12-09 23:15:26.202310] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:33:45.817 
23:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.817 23:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:33:45.817 23:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:45.817 23:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:45.817 23:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:33:45.817 23:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:45.817 23:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:45.817 23:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:45.817 23:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:45.817 23:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:45.817 23:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:45.817 23:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:45.817 23:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.817 23:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:45.817 23:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:45.817 23:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.817 23:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:45.817 "name": "Existed_Raid", 00:33:45.817 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:33:45.817 "strip_size_kb": 64, 00:33:45.817 "state": "configuring", 00:33:45.817 "raid_level": "concat", 00:33:45.817 "superblock": false, 00:33:45.817 "num_base_bdevs": 3, 00:33:45.817 "num_base_bdevs_discovered": 1, 00:33:45.817 "num_base_bdevs_operational": 3, 00:33:45.817 "base_bdevs_list": [ 00:33:45.817 { 00:33:45.817 "name": "BaseBdev1", 00:33:45.817 "uuid": "7cdb9427-f9b3-4c5f-bbdf-24bedd6aaa9c", 00:33:45.817 "is_configured": true, 00:33:45.817 "data_offset": 0, 00:33:45.817 "data_size": 65536 00:33:45.817 }, 00:33:45.817 { 00:33:45.817 "name": null, 00:33:45.817 "uuid": "9d3f5f1a-52f3-4884-a381-817775c84ac8", 00:33:45.817 "is_configured": false, 00:33:45.817 "data_offset": 0, 00:33:45.817 "data_size": 65536 00:33:45.817 }, 00:33:45.817 { 00:33:45.817 "name": null, 00:33:45.817 "uuid": "2df89a9d-ce41-4897-8b4c-cc7dbd6f06a9", 00:33:45.818 "is_configured": false, 00:33:45.818 "data_offset": 0, 00:33:45.818 "data_size": 65536 00:33:45.818 } 00:33:45.818 ] 00:33:45.818 }' 00:33:45.818 23:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:45.818 23:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:46.077 23:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:46.077 23:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.077 23:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:46.077 23:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:33:46.077 23:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.337 23:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:33:46.337 23:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:33:46.337 23:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.337 23:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:46.337 [2024-12-09 23:15:26.721883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:46.337 23:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.337 23:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:33:46.337 23:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:46.337 23:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:46.337 23:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:33:46.337 23:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:46.337 23:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:46.337 23:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:46.337 23:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:46.337 23:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:46.337 23:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:46.337 23:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:46.337 23:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.337 23:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:33:46.337 23:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:46.337 23:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.337 23:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:46.337 "name": "Existed_Raid", 00:33:46.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:46.337 "strip_size_kb": 64, 00:33:46.337 "state": "configuring", 00:33:46.337 "raid_level": "concat", 00:33:46.337 "superblock": false, 00:33:46.337 "num_base_bdevs": 3, 00:33:46.337 "num_base_bdevs_discovered": 2, 00:33:46.337 "num_base_bdevs_operational": 3, 00:33:46.337 "base_bdevs_list": [ 00:33:46.337 { 00:33:46.337 "name": "BaseBdev1", 00:33:46.337 "uuid": "7cdb9427-f9b3-4c5f-bbdf-24bedd6aaa9c", 00:33:46.337 "is_configured": true, 00:33:46.337 "data_offset": 0, 00:33:46.337 "data_size": 65536 00:33:46.337 }, 00:33:46.337 { 00:33:46.337 "name": null, 00:33:46.337 "uuid": "9d3f5f1a-52f3-4884-a381-817775c84ac8", 00:33:46.337 "is_configured": false, 00:33:46.337 "data_offset": 0, 00:33:46.337 "data_size": 65536 00:33:46.337 }, 00:33:46.337 { 00:33:46.337 "name": "BaseBdev3", 00:33:46.337 "uuid": "2df89a9d-ce41-4897-8b4c-cc7dbd6f06a9", 00:33:46.337 "is_configured": true, 00:33:46.337 "data_offset": 0, 00:33:46.337 "data_size": 65536 00:33:46.337 } 00:33:46.337 ] 00:33:46.337 }' 00:33:46.337 23:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:46.337 23:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:46.596 23:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:46.596 23:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.596 23:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:33:46.596 23:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:46.596 23:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.596 23:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:33:46.596 23:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:33:46.596 23:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.596 23:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:46.596 [2024-12-09 23:15:27.217237] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:46.854 23:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.854 23:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:33:46.854 23:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:46.854 23:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:46.854 23:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:33:46.854 23:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:46.854 23:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:46.854 23:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:46.854 23:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:46.854 23:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:46.854 23:15:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:46.854 23:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:46.854 23:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:46.854 23:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.854 23:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:46.854 23:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.854 23:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:46.854 "name": "Existed_Raid", 00:33:46.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:46.854 "strip_size_kb": 64, 00:33:46.854 "state": "configuring", 00:33:46.854 "raid_level": "concat", 00:33:46.854 "superblock": false, 00:33:46.854 "num_base_bdevs": 3, 00:33:46.854 "num_base_bdevs_discovered": 1, 00:33:46.854 "num_base_bdevs_operational": 3, 00:33:46.854 "base_bdevs_list": [ 00:33:46.854 { 00:33:46.854 "name": null, 00:33:46.854 "uuid": "7cdb9427-f9b3-4c5f-bbdf-24bedd6aaa9c", 00:33:46.854 "is_configured": false, 00:33:46.854 "data_offset": 0, 00:33:46.854 "data_size": 65536 00:33:46.854 }, 00:33:46.854 { 00:33:46.854 "name": null, 00:33:46.854 "uuid": "9d3f5f1a-52f3-4884-a381-817775c84ac8", 00:33:46.854 "is_configured": false, 00:33:46.854 "data_offset": 0, 00:33:46.854 "data_size": 65536 00:33:46.854 }, 00:33:46.854 { 00:33:46.854 "name": "BaseBdev3", 00:33:46.854 "uuid": "2df89a9d-ce41-4897-8b4c-cc7dbd6f06a9", 00:33:46.854 "is_configured": true, 00:33:46.854 "data_offset": 0, 00:33:46.854 "data_size": 65536 00:33:46.854 } 00:33:46.854 ] 00:33:46.854 }' 00:33:46.854 23:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:46.854 23:15:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:47.110 23:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:33:47.110 23:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:47.110 23:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.110 23:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:47.369 23:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.369 23:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:33:47.369 23:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:33:47.369 23:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.369 23:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:47.369 [2024-12-09 23:15:27.782333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:47.369 23:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.369 23:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:33:47.369 23:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:47.369 23:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:47.369 23:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:33:47.369 23:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:47.369 23:15:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:47.369 23:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:47.369 23:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:47.369 23:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:47.369 23:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:47.369 23:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:47.369 23:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.369 23:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:47.369 23:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:47.369 23:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.369 23:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:47.369 "name": "Existed_Raid", 00:33:47.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:47.369 "strip_size_kb": 64, 00:33:47.369 "state": "configuring", 00:33:47.369 "raid_level": "concat", 00:33:47.369 "superblock": false, 00:33:47.369 "num_base_bdevs": 3, 00:33:47.369 "num_base_bdevs_discovered": 2, 00:33:47.369 "num_base_bdevs_operational": 3, 00:33:47.369 "base_bdevs_list": [ 00:33:47.369 { 00:33:47.369 "name": null, 00:33:47.369 "uuid": "7cdb9427-f9b3-4c5f-bbdf-24bedd6aaa9c", 00:33:47.369 "is_configured": false, 00:33:47.369 "data_offset": 0, 00:33:47.369 "data_size": 65536 00:33:47.369 }, 00:33:47.369 { 00:33:47.369 "name": "BaseBdev2", 00:33:47.369 "uuid": "9d3f5f1a-52f3-4884-a381-817775c84ac8", 00:33:47.369 "is_configured": true, 00:33:47.369 "data_offset": 
0, 00:33:47.369 "data_size": 65536 00:33:47.369 }, 00:33:47.369 { 00:33:47.369 "name": "BaseBdev3", 00:33:47.369 "uuid": "2df89a9d-ce41-4897-8b4c-cc7dbd6f06a9", 00:33:47.369 "is_configured": true, 00:33:47.369 "data_offset": 0, 00:33:47.369 "data_size": 65536 00:33:47.369 } 00:33:47.369 ] 00:33:47.369 }' 00:33:47.369 23:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:47.369 23:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:47.628 23:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:47.628 23:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.628 23:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:47.628 23:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:33:47.628 23:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.628 23:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:33:47.628 23:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:47.628 23:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:33:47.628 23:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.628 23:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:47.887 23:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.887 23:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7cdb9427-f9b3-4c5f-bbdf-24bedd6aaa9c 00:33:47.887 23:15:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.887 23:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:47.887 [2024-12-09 23:15:28.345298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:33:47.887 [2024-12-09 23:15:28.345357] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:33:47.887 [2024-12-09 23:15:28.345369] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:33:47.887 [2024-12-09 23:15:28.345663] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:33:47.887 [2024-12-09 23:15:28.345832] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:33:47.887 [2024-12-09 23:15:28.345844] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:33:47.887 [2024-12-09 23:15:28.346139] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:47.887 NewBaseBdev 00:33:47.887 23:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.887 23:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:33:47.887 23:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:33:47.887 23:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:47.887 23:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:33:47.887 23:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:47.887 23:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:47.887 23:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:47.887 
23:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.887 23:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:47.887 23:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.887 23:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:33:47.887 23:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.887 23:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:47.887 [ 00:33:47.887 { 00:33:47.887 "name": "NewBaseBdev", 00:33:47.887 "aliases": [ 00:33:47.887 "7cdb9427-f9b3-4c5f-bbdf-24bedd6aaa9c" 00:33:47.887 ], 00:33:47.887 "product_name": "Malloc disk", 00:33:47.887 "block_size": 512, 00:33:47.887 "num_blocks": 65536, 00:33:47.887 "uuid": "7cdb9427-f9b3-4c5f-bbdf-24bedd6aaa9c", 00:33:47.887 "assigned_rate_limits": { 00:33:47.887 "rw_ios_per_sec": 0, 00:33:47.887 "rw_mbytes_per_sec": 0, 00:33:47.887 "r_mbytes_per_sec": 0, 00:33:47.887 "w_mbytes_per_sec": 0 00:33:47.887 }, 00:33:47.887 "claimed": true, 00:33:47.887 "claim_type": "exclusive_write", 00:33:47.887 "zoned": false, 00:33:47.887 "supported_io_types": { 00:33:47.887 "read": true, 00:33:47.887 "write": true, 00:33:47.887 "unmap": true, 00:33:47.887 "flush": true, 00:33:47.887 "reset": true, 00:33:47.887 "nvme_admin": false, 00:33:47.887 "nvme_io": false, 00:33:47.887 "nvme_io_md": false, 00:33:47.887 "write_zeroes": true, 00:33:47.887 "zcopy": true, 00:33:47.887 "get_zone_info": false, 00:33:47.887 "zone_management": false, 00:33:47.887 "zone_append": false, 00:33:47.887 "compare": false, 00:33:47.887 "compare_and_write": false, 00:33:47.887 "abort": true, 00:33:47.887 "seek_hole": false, 00:33:47.887 "seek_data": false, 00:33:47.887 "copy": true, 00:33:47.887 "nvme_iov_md": false 00:33:47.887 }, 00:33:47.887 
"memory_domains": [ 00:33:47.887 { 00:33:47.887 "dma_device_id": "system", 00:33:47.887 "dma_device_type": 1 00:33:47.887 }, 00:33:47.887 { 00:33:47.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:47.887 "dma_device_type": 2 00:33:47.887 } 00:33:47.887 ], 00:33:47.887 "driver_specific": {} 00:33:47.887 } 00:33:47.887 ] 00:33:47.887 23:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.887 23:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:33:47.887 23:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:33:47.887 23:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:47.887 23:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:47.887 23:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:33:47.887 23:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:47.887 23:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:47.887 23:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:47.887 23:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:47.887 23:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:47.887 23:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:47.887 23:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:47.887 23:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.887 23:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:47.887 23:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:47.887 23:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.887 23:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:47.887 "name": "Existed_Raid", 00:33:47.887 "uuid": "c8d4da0e-f4be-4d21-a972-d8984b1f393d", 00:33:47.887 "strip_size_kb": 64, 00:33:47.887 "state": "online", 00:33:47.887 "raid_level": "concat", 00:33:47.887 "superblock": false, 00:33:47.887 "num_base_bdevs": 3, 00:33:47.887 "num_base_bdevs_discovered": 3, 00:33:47.887 "num_base_bdevs_operational": 3, 00:33:47.887 "base_bdevs_list": [ 00:33:47.887 { 00:33:47.887 "name": "NewBaseBdev", 00:33:47.887 "uuid": "7cdb9427-f9b3-4c5f-bbdf-24bedd6aaa9c", 00:33:47.887 "is_configured": true, 00:33:47.887 "data_offset": 0, 00:33:47.887 "data_size": 65536 00:33:47.887 }, 00:33:47.887 { 00:33:47.887 "name": "BaseBdev2", 00:33:47.887 "uuid": "9d3f5f1a-52f3-4884-a381-817775c84ac8", 00:33:47.887 "is_configured": true, 00:33:47.887 "data_offset": 0, 00:33:47.887 "data_size": 65536 00:33:47.887 }, 00:33:47.887 { 00:33:47.887 "name": "BaseBdev3", 00:33:47.887 "uuid": "2df89a9d-ce41-4897-8b4c-cc7dbd6f06a9", 00:33:47.887 "is_configured": true, 00:33:47.887 "data_offset": 0, 00:33:47.887 "data_size": 65536 00:33:47.887 } 00:33:47.887 ] 00:33:47.887 }' 00:33:47.887 23:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:47.887 23:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:48.454 23:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:33:48.454 23:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:33:48.454 23:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:33:48.454 23:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:48.454 23:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:33:48.454 23:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:48.454 23:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:48.454 23:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:33:48.455 23:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.455 23:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:48.455 [2024-12-09 23:15:28.840970] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:48.455 23:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.455 23:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:48.455 "name": "Existed_Raid", 00:33:48.455 "aliases": [ 00:33:48.455 "c8d4da0e-f4be-4d21-a972-d8984b1f393d" 00:33:48.455 ], 00:33:48.455 "product_name": "Raid Volume", 00:33:48.455 "block_size": 512, 00:33:48.455 "num_blocks": 196608, 00:33:48.455 "uuid": "c8d4da0e-f4be-4d21-a972-d8984b1f393d", 00:33:48.455 "assigned_rate_limits": { 00:33:48.455 "rw_ios_per_sec": 0, 00:33:48.455 "rw_mbytes_per_sec": 0, 00:33:48.455 "r_mbytes_per_sec": 0, 00:33:48.455 "w_mbytes_per_sec": 0 00:33:48.455 }, 00:33:48.455 "claimed": false, 00:33:48.455 "zoned": false, 00:33:48.455 "supported_io_types": { 00:33:48.455 "read": true, 00:33:48.455 "write": true, 00:33:48.455 "unmap": true, 00:33:48.455 "flush": true, 00:33:48.455 "reset": true, 00:33:48.455 "nvme_admin": false, 00:33:48.455 "nvme_io": false, 00:33:48.455 "nvme_io_md": false, 00:33:48.455 "write_zeroes": true, 
00:33:48.455 "zcopy": false, 00:33:48.455 "get_zone_info": false, 00:33:48.455 "zone_management": false, 00:33:48.455 "zone_append": false, 00:33:48.455 "compare": false, 00:33:48.455 "compare_and_write": false, 00:33:48.455 "abort": false, 00:33:48.455 "seek_hole": false, 00:33:48.455 "seek_data": false, 00:33:48.455 "copy": false, 00:33:48.455 "nvme_iov_md": false 00:33:48.455 }, 00:33:48.455 "memory_domains": [ 00:33:48.455 { 00:33:48.455 "dma_device_id": "system", 00:33:48.455 "dma_device_type": 1 00:33:48.455 }, 00:33:48.455 { 00:33:48.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:48.455 "dma_device_type": 2 00:33:48.455 }, 00:33:48.455 { 00:33:48.455 "dma_device_id": "system", 00:33:48.455 "dma_device_type": 1 00:33:48.455 }, 00:33:48.455 { 00:33:48.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:48.455 "dma_device_type": 2 00:33:48.455 }, 00:33:48.455 { 00:33:48.455 "dma_device_id": "system", 00:33:48.455 "dma_device_type": 1 00:33:48.455 }, 00:33:48.455 { 00:33:48.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:48.455 "dma_device_type": 2 00:33:48.455 } 00:33:48.455 ], 00:33:48.455 "driver_specific": { 00:33:48.455 "raid": { 00:33:48.455 "uuid": "c8d4da0e-f4be-4d21-a972-d8984b1f393d", 00:33:48.455 "strip_size_kb": 64, 00:33:48.455 "state": "online", 00:33:48.455 "raid_level": "concat", 00:33:48.455 "superblock": false, 00:33:48.455 "num_base_bdevs": 3, 00:33:48.455 "num_base_bdevs_discovered": 3, 00:33:48.455 "num_base_bdevs_operational": 3, 00:33:48.455 "base_bdevs_list": [ 00:33:48.455 { 00:33:48.455 "name": "NewBaseBdev", 00:33:48.455 "uuid": "7cdb9427-f9b3-4c5f-bbdf-24bedd6aaa9c", 00:33:48.455 "is_configured": true, 00:33:48.455 "data_offset": 0, 00:33:48.455 "data_size": 65536 00:33:48.455 }, 00:33:48.455 { 00:33:48.455 "name": "BaseBdev2", 00:33:48.455 "uuid": "9d3f5f1a-52f3-4884-a381-817775c84ac8", 00:33:48.455 "is_configured": true, 00:33:48.455 "data_offset": 0, 00:33:48.455 "data_size": 65536 00:33:48.455 }, 00:33:48.455 { 
00:33:48.455 "name": "BaseBdev3", 00:33:48.455 "uuid": "2df89a9d-ce41-4897-8b4c-cc7dbd6f06a9", 00:33:48.455 "is_configured": true, 00:33:48.455 "data_offset": 0, 00:33:48.455 "data_size": 65536 00:33:48.455 } 00:33:48.455 ] 00:33:48.455 } 00:33:48.455 } 00:33:48.455 }' 00:33:48.455 23:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:48.455 23:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:33:48.455 BaseBdev2 00:33:48.455 BaseBdev3' 00:33:48.455 23:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:48.455 23:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:33:48.455 23:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:48.455 23:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:33:48.455 23:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.455 23:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:48.455 23:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:48.455 23:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.455 23:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:48.455 23:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:48.455 23:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:48.455 23:15:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:33:48.455 23:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.455 23:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:48.455 23:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:48.455 23:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.455 23:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:48.455 23:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:48.455 23:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:48.455 23:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:48.455 23:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:33:48.455 23:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.455 23:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:48.713 23:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.713 23:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:48.713 23:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:48.713 23:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:48.713 23:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.713 23:15:29 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:33:48.713 [2024-12-09 23:15:29.112317] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:48.713 [2024-12-09 23:15:29.112352] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:48.713 [2024-12-09 23:15:29.112461] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:48.713 [2024-12-09 23:15:29.112522] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:48.713 [2024-12-09 23:15:29.112536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:33:48.713 23:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.713 23:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65489 00:33:48.713 23:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65489 ']' 00:33:48.713 23:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65489 00:33:48.713 23:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:33:48.713 23:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:48.713 23:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65489 00:33:48.713 killing process with pid 65489 00:33:48.713 23:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:48.713 23:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:48.713 23:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65489' 00:33:48.713 23:15:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 65489 00:33:48.713 23:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65489 00:33:48.713 [2024-12-09 23:15:29.155780] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:48.972 [2024-12-09 23:15:29.471487] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:50.351 23:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:33:50.351 ************************************ 00:33:50.351 END TEST raid_state_function_test 00:33:50.351 ************************************ 00:33:50.351 00:33:50.351 real 0m10.857s 00:33:50.351 user 0m17.176s 00:33:50.351 sys 0m2.126s 00:33:50.351 23:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:50.351 23:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:50.351 23:15:30 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:33:50.351 23:15:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:33:50.351 23:15:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:50.351 23:15:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:50.351 ************************************ 00:33:50.351 START TEST raid_state_function_test_sb 00:33:50.351 ************************************ 00:33:50.351 23:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:33:50.351 23:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:33:50.351 23:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:33:50.351 23:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:33:50.351 23:15:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:33:50.351 23:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:33:50.351 23:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:50.351 23:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:33:50.351 23:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:50.351 23:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:50.351 23:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:33:50.351 23:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:50.351 23:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:50.351 23:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:33:50.351 23:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:50.351 23:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:50.351 23:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:33:50.351 23:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:33:50.351 23:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:33:50.351 23:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:33:50.351 23:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:33:50.351 23:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:33:50.351 23:15:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:33:50.351 23:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:33:50.351 23:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:33:50.351 23:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:33:50.351 23:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:33:50.351 23:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66116 00:33:50.351 23:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:33:50.351 Process raid pid: 66116 00:33:50.351 23:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66116' 00:33:50.351 23:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66116 00:33:50.351 23:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66116 ']' 00:33:50.351 23:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:50.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:50.351 23:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:50.351 23:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:50.351 23:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:50.351 23:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:50.351 [2024-12-09 23:15:30.847480] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:33:50.351 [2024-12-09 23:15:30.847834] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:50.610 [2024-12-09 23:15:31.031578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:50.610 [2024-12-09 23:15:31.162285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:50.869 [2024-12-09 23:15:31.390560] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:50.869 [2024-12-09 23:15:31.390844] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:51.128 23:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:51.128 23:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:33:51.128 23:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:33:51.128 23:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.128 23:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:51.128 [2024-12-09 23:15:31.761999] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:51.128 [2024-12-09 23:15:31.762066] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:51.128 [2024-12-09 
23:15:31.762079] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:51.128 [2024-12-09 23:15:31.762094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:51.128 [2024-12-09 23:15:31.762102] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:51.128 [2024-12-09 23:15:31.762115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:51.387 23:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.387 23:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:33:51.387 23:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:51.387 23:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:51.387 23:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:33:51.387 23:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:51.387 23:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:51.387 23:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:51.387 23:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:51.387 23:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:51.387 23:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:51.387 23:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:51.387 23:15:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:51.387 23:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.387 23:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:51.387 23:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.387 23:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:51.387 "name": "Existed_Raid", 00:33:51.387 "uuid": "741312df-4d60-4ff1-9170-5ce99f46fb54", 00:33:51.387 "strip_size_kb": 64, 00:33:51.387 "state": "configuring", 00:33:51.387 "raid_level": "concat", 00:33:51.387 "superblock": true, 00:33:51.387 "num_base_bdevs": 3, 00:33:51.387 "num_base_bdevs_discovered": 0, 00:33:51.387 "num_base_bdevs_operational": 3, 00:33:51.387 "base_bdevs_list": [ 00:33:51.387 { 00:33:51.387 "name": "BaseBdev1", 00:33:51.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:51.387 "is_configured": false, 00:33:51.388 "data_offset": 0, 00:33:51.388 "data_size": 0 00:33:51.388 }, 00:33:51.388 { 00:33:51.388 "name": "BaseBdev2", 00:33:51.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:51.388 "is_configured": false, 00:33:51.388 "data_offset": 0, 00:33:51.388 "data_size": 0 00:33:51.388 }, 00:33:51.388 { 00:33:51.388 "name": "BaseBdev3", 00:33:51.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:51.388 "is_configured": false, 00:33:51.388 "data_offset": 0, 00:33:51.388 "data_size": 0 00:33:51.388 } 00:33:51.388 ] 00:33:51.388 }' 00:33:51.388 23:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:51.388 23:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:51.646 23:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:51.646 23:15:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.646 23:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:51.646 [2024-12-09 23:15:32.221346] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:51.646 [2024-12-09 23:15:32.221404] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:33:51.646 23:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.646 23:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:33:51.646 23:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.646 23:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:51.646 [2024-12-09 23:15:32.233342] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:51.646 [2024-12-09 23:15:32.233573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:51.646 [2024-12-09 23:15:32.233601] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:51.646 [2024-12-09 23:15:32.233616] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:51.646 [2024-12-09 23:15:32.233625] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:51.646 [2024-12-09 23:15:32.233638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:51.646 23:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.646 23:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:33:51.646 
23:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.646 23:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:51.905 [2024-12-09 23:15:32.285080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:51.905 BaseBdev1 00:33:51.905 23:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.905 23:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:33:51.905 23:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:33:51.905 23:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:51.905 23:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:33:51.905 23:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:51.905 23:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:51.905 23:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:51.905 23:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.905 23:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:51.905 23:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.905 23:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:51.905 23:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.905 23:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:51.905 [ 00:33:51.905 { 
00:33:51.905 "name": "BaseBdev1", 00:33:51.905 "aliases": [ 00:33:51.905 "9ff86918-f36e-47a4-b93b-a6e46ebfd534" 00:33:51.905 ], 00:33:51.905 "product_name": "Malloc disk", 00:33:51.905 "block_size": 512, 00:33:51.905 "num_blocks": 65536, 00:33:51.905 "uuid": "9ff86918-f36e-47a4-b93b-a6e46ebfd534", 00:33:51.905 "assigned_rate_limits": { 00:33:51.905 "rw_ios_per_sec": 0, 00:33:51.905 "rw_mbytes_per_sec": 0, 00:33:51.905 "r_mbytes_per_sec": 0, 00:33:51.905 "w_mbytes_per_sec": 0 00:33:51.905 }, 00:33:51.905 "claimed": true, 00:33:51.905 "claim_type": "exclusive_write", 00:33:51.905 "zoned": false, 00:33:51.905 "supported_io_types": { 00:33:51.905 "read": true, 00:33:51.905 "write": true, 00:33:51.905 "unmap": true, 00:33:51.905 "flush": true, 00:33:51.905 "reset": true, 00:33:51.905 "nvme_admin": false, 00:33:51.905 "nvme_io": false, 00:33:51.905 "nvme_io_md": false, 00:33:51.905 "write_zeroes": true, 00:33:51.905 "zcopy": true, 00:33:51.905 "get_zone_info": false, 00:33:51.905 "zone_management": false, 00:33:51.905 "zone_append": false, 00:33:51.905 "compare": false, 00:33:51.905 "compare_and_write": false, 00:33:51.905 "abort": true, 00:33:51.905 "seek_hole": false, 00:33:51.905 "seek_data": false, 00:33:51.905 "copy": true, 00:33:51.905 "nvme_iov_md": false 00:33:51.905 }, 00:33:51.905 "memory_domains": [ 00:33:51.905 { 00:33:51.905 "dma_device_id": "system", 00:33:51.905 "dma_device_type": 1 00:33:51.905 }, 00:33:51.905 { 00:33:51.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:51.905 "dma_device_type": 2 00:33:51.905 } 00:33:51.905 ], 00:33:51.905 "driver_specific": {} 00:33:51.905 } 00:33:51.905 ] 00:33:51.905 23:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.905 23:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:33:51.905 23:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:33:51.905 23:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:51.905 23:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:51.905 23:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:33:51.905 23:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:51.905 23:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:51.905 23:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:51.905 23:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:51.905 23:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:51.905 23:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:51.905 23:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:51.905 23:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:51.905 23:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.905 23:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:51.905 23:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.905 23:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:51.905 "name": "Existed_Raid", 00:33:51.905 "uuid": "0d4f91f0-a0b1-4fa3-91db-70018ad69381", 00:33:51.905 "strip_size_kb": 64, 00:33:51.905 "state": "configuring", 00:33:51.905 "raid_level": "concat", 00:33:51.905 "superblock": true, 00:33:51.905 
"num_base_bdevs": 3, 00:33:51.905 "num_base_bdevs_discovered": 1, 00:33:51.905 "num_base_bdevs_operational": 3, 00:33:51.905 "base_bdevs_list": [ 00:33:51.905 { 00:33:51.905 "name": "BaseBdev1", 00:33:51.905 "uuid": "9ff86918-f36e-47a4-b93b-a6e46ebfd534", 00:33:51.905 "is_configured": true, 00:33:51.905 "data_offset": 2048, 00:33:51.905 "data_size": 63488 00:33:51.905 }, 00:33:51.905 { 00:33:51.905 "name": "BaseBdev2", 00:33:51.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:51.905 "is_configured": false, 00:33:51.905 "data_offset": 0, 00:33:51.905 "data_size": 0 00:33:51.905 }, 00:33:51.905 { 00:33:51.905 "name": "BaseBdev3", 00:33:51.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:51.905 "is_configured": false, 00:33:51.905 "data_offset": 0, 00:33:51.905 "data_size": 0 00:33:51.905 } 00:33:51.905 ] 00:33:51.905 }' 00:33:51.905 23:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:51.905 23:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:52.165 23:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:52.165 23:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.165 23:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:52.165 [2024-12-09 23:15:32.772490] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:52.165 [2024-12-09 23:15:32.772552] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:33:52.165 23:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.165 23:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:33:52.165 
23:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.165 23:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:52.165 [2024-12-09 23:15:32.784552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:52.165 [2024-12-09 23:15:32.786643] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:52.165 [2024-12-09 23:15:32.786692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:52.165 [2024-12-09 23:15:32.786703] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:52.165 [2024-12-09 23:15:32.786715] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:52.165 23:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.165 23:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:33:52.165 23:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:52.165 23:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:33:52.165 23:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:52.165 23:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:52.165 23:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:33:52.165 23:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:52.165 23:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:52.165 23:15:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:52.165 23:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:52.165 23:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:52.165 23:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:52.165 23:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:52.165 23:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.165 23:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:52.165 23:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:52.424 23:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.424 23:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:52.424 "name": "Existed_Raid", 00:33:52.424 "uuid": "5c9957e1-dc62-4585-b2e8-c2bcd04c410e", 00:33:52.424 "strip_size_kb": 64, 00:33:52.424 "state": "configuring", 00:33:52.424 "raid_level": "concat", 00:33:52.424 "superblock": true, 00:33:52.424 "num_base_bdevs": 3, 00:33:52.424 "num_base_bdevs_discovered": 1, 00:33:52.424 "num_base_bdevs_operational": 3, 00:33:52.424 "base_bdevs_list": [ 00:33:52.424 { 00:33:52.424 "name": "BaseBdev1", 00:33:52.424 "uuid": "9ff86918-f36e-47a4-b93b-a6e46ebfd534", 00:33:52.424 "is_configured": true, 00:33:52.424 "data_offset": 2048, 00:33:52.424 "data_size": 63488 00:33:52.424 }, 00:33:52.424 { 00:33:52.424 "name": "BaseBdev2", 00:33:52.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:52.424 "is_configured": false, 00:33:52.424 "data_offset": 0, 00:33:52.424 "data_size": 0 00:33:52.424 }, 00:33:52.424 { 00:33:52.424 "name": "BaseBdev3", 00:33:52.424 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:33:52.424 "is_configured": false, 00:33:52.424 "data_offset": 0, 00:33:52.424 "data_size": 0 00:33:52.424 } 00:33:52.424 ] 00:33:52.424 }' 00:33:52.424 23:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:52.424 23:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:52.682 23:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:33:52.682 23:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.682 23:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:52.682 [2024-12-09 23:15:33.276027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:52.682 BaseBdev2 00:33:52.682 23:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.682 23:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:33:52.682 23:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:33:52.682 23:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:52.682 23:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:33:52.683 23:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:52.683 23:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:52.683 23:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:52.683 23:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.683 23:15:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:33:52.683 23:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.683 23:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:52.683 23:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.683 23:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:52.683 [ 00:33:52.683 { 00:33:52.683 "name": "BaseBdev2", 00:33:52.683 "aliases": [ 00:33:52.683 "342ce7bb-53b3-414d-9406-46d373fa66e4" 00:33:52.683 ], 00:33:52.683 "product_name": "Malloc disk", 00:33:52.683 "block_size": 512, 00:33:52.683 "num_blocks": 65536, 00:33:52.683 "uuid": "342ce7bb-53b3-414d-9406-46d373fa66e4", 00:33:52.683 "assigned_rate_limits": { 00:33:52.683 "rw_ios_per_sec": 0, 00:33:52.683 "rw_mbytes_per_sec": 0, 00:33:52.683 "r_mbytes_per_sec": 0, 00:33:52.683 "w_mbytes_per_sec": 0 00:33:52.683 }, 00:33:52.683 "claimed": true, 00:33:52.683 "claim_type": "exclusive_write", 00:33:52.683 "zoned": false, 00:33:52.683 "supported_io_types": { 00:33:52.683 "read": true, 00:33:52.683 "write": true, 00:33:52.683 "unmap": true, 00:33:52.683 "flush": true, 00:33:52.683 "reset": true, 00:33:52.683 "nvme_admin": false, 00:33:52.683 "nvme_io": false, 00:33:52.683 "nvme_io_md": false, 00:33:52.683 "write_zeroes": true, 00:33:52.683 "zcopy": true, 00:33:52.941 "get_zone_info": false, 00:33:52.941 "zone_management": false, 00:33:52.941 "zone_append": false, 00:33:52.941 "compare": false, 00:33:52.941 "compare_and_write": false, 00:33:52.941 "abort": true, 00:33:52.941 "seek_hole": false, 00:33:52.941 "seek_data": false, 00:33:52.941 "copy": true, 00:33:52.941 "nvme_iov_md": false 00:33:52.941 }, 00:33:52.941 "memory_domains": [ 00:33:52.941 { 00:33:52.941 "dma_device_id": "system", 00:33:52.941 "dma_device_type": 1 00:33:52.941 }, 00:33:52.941 { 00:33:52.941 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:52.941 "dma_device_type": 2 00:33:52.941 } 00:33:52.941 ], 00:33:52.941 "driver_specific": {} 00:33:52.941 } 00:33:52.941 ] 00:33:52.941 23:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.941 23:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:33:52.941 23:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:33:52.941 23:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:52.941 23:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:33:52.941 23:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:52.941 23:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:52.941 23:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:33:52.941 23:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:52.941 23:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:52.941 23:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:52.941 23:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:52.941 23:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:52.941 23:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:52.941 23:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:52.941 23:15:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:52.941 23:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.941 23:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:52.941 23:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.941 23:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:52.941 "name": "Existed_Raid", 00:33:52.941 "uuid": "5c9957e1-dc62-4585-b2e8-c2bcd04c410e", 00:33:52.941 "strip_size_kb": 64, 00:33:52.941 "state": "configuring", 00:33:52.941 "raid_level": "concat", 00:33:52.941 "superblock": true, 00:33:52.941 "num_base_bdevs": 3, 00:33:52.941 "num_base_bdevs_discovered": 2, 00:33:52.941 "num_base_bdevs_operational": 3, 00:33:52.941 "base_bdevs_list": [ 00:33:52.941 { 00:33:52.941 "name": "BaseBdev1", 00:33:52.941 "uuid": "9ff86918-f36e-47a4-b93b-a6e46ebfd534", 00:33:52.941 "is_configured": true, 00:33:52.941 "data_offset": 2048, 00:33:52.941 "data_size": 63488 00:33:52.941 }, 00:33:52.941 { 00:33:52.941 "name": "BaseBdev2", 00:33:52.941 "uuid": "342ce7bb-53b3-414d-9406-46d373fa66e4", 00:33:52.941 "is_configured": true, 00:33:52.941 "data_offset": 2048, 00:33:52.941 "data_size": 63488 00:33:52.941 }, 00:33:52.941 { 00:33:52.942 "name": "BaseBdev3", 00:33:52.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:52.942 "is_configured": false, 00:33:52.942 "data_offset": 0, 00:33:52.942 "data_size": 0 00:33:52.942 } 00:33:52.942 ] 00:33:52.942 }' 00:33:52.942 23:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:52.942 23:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:53.201 23:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:33:53.201 23:15:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.201 23:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:53.201 [2024-12-09 23:15:33.806552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:53.201 [2024-12-09 23:15:33.806829] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:33:53.201 [2024-12-09 23:15:33.806853] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:33:53.201 [2024-12-09 23:15:33.807151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:33:53.201 BaseBdev3 00:33:53.201 [2024-12-09 23:15:33.807319] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:33:53.201 [2024-12-09 23:15:33.807336] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:33:53.201 [2024-12-09 23:15:33.807501] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:53.201 23:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.201 23:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:33:53.201 23:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:33:53.201 23:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:53.201 23:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:33:53.201 23:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:53.201 23:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:53.201 23:15:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:53.201 23:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.201 23:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:53.201 23:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.201 23:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:33:53.201 23:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.201 23:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:53.201 [ 00:33:53.201 { 00:33:53.201 "name": "BaseBdev3", 00:33:53.201 "aliases": [ 00:33:53.201 "a42a51a1-0655-4a46-931a-8bc29c24a146" 00:33:53.201 ], 00:33:53.201 "product_name": "Malloc disk", 00:33:53.201 "block_size": 512, 00:33:53.201 "num_blocks": 65536, 00:33:53.460 "uuid": "a42a51a1-0655-4a46-931a-8bc29c24a146", 00:33:53.460 "assigned_rate_limits": { 00:33:53.460 "rw_ios_per_sec": 0, 00:33:53.460 "rw_mbytes_per_sec": 0, 00:33:53.460 "r_mbytes_per_sec": 0, 00:33:53.460 "w_mbytes_per_sec": 0 00:33:53.460 }, 00:33:53.460 "claimed": true, 00:33:53.460 "claim_type": "exclusive_write", 00:33:53.460 "zoned": false, 00:33:53.460 "supported_io_types": { 00:33:53.460 "read": true, 00:33:53.460 "write": true, 00:33:53.460 "unmap": true, 00:33:53.460 "flush": true, 00:33:53.460 "reset": true, 00:33:53.460 "nvme_admin": false, 00:33:53.460 "nvme_io": false, 00:33:53.460 "nvme_io_md": false, 00:33:53.460 "write_zeroes": true, 00:33:53.460 "zcopy": true, 00:33:53.460 "get_zone_info": false, 00:33:53.460 "zone_management": false, 00:33:53.460 "zone_append": false, 00:33:53.460 "compare": false, 00:33:53.460 "compare_and_write": false, 00:33:53.460 "abort": true, 00:33:53.460 "seek_hole": false, 00:33:53.460 "seek_data": false, 
00:33:53.460 "copy": true, 00:33:53.460 "nvme_iov_md": false 00:33:53.460 }, 00:33:53.460 "memory_domains": [ 00:33:53.460 { 00:33:53.460 "dma_device_id": "system", 00:33:53.460 "dma_device_type": 1 00:33:53.460 }, 00:33:53.460 { 00:33:53.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:53.460 "dma_device_type": 2 00:33:53.460 } 00:33:53.460 ], 00:33:53.460 "driver_specific": {} 00:33:53.460 } 00:33:53.460 ] 00:33:53.460 23:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.460 23:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:33:53.460 23:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:33:53.460 23:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:53.460 23:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:33:53.460 23:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:53.460 23:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:53.460 23:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:33:53.460 23:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:53.460 23:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:53.460 23:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:53.460 23:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:53.460 23:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:53.460 23:15:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:33:53.460 23:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:53.460 23:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.460 23:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:53.460 23:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:53.460 23:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.460 23:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:53.460 "name": "Existed_Raid", 00:33:53.460 "uuid": "5c9957e1-dc62-4585-b2e8-c2bcd04c410e", 00:33:53.460 "strip_size_kb": 64, 00:33:53.460 "state": "online", 00:33:53.460 "raid_level": "concat", 00:33:53.460 "superblock": true, 00:33:53.460 "num_base_bdevs": 3, 00:33:53.460 "num_base_bdevs_discovered": 3, 00:33:53.460 "num_base_bdevs_operational": 3, 00:33:53.460 "base_bdevs_list": [ 00:33:53.460 { 00:33:53.460 "name": "BaseBdev1", 00:33:53.460 "uuid": "9ff86918-f36e-47a4-b93b-a6e46ebfd534", 00:33:53.460 "is_configured": true, 00:33:53.460 "data_offset": 2048, 00:33:53.460 "data_size": 63488 00:33:53.460 }, 00:33:53.460 { 00:33:53.460 "name": "BaseBdev2", 00:33:53.460 "uuid": "342ce7bb-53b3-414d-9406-46d373fa66e4", 00:33:53.460 "is_configured": true, 00:33:53.460 "data_offset": 2048, 00:33:53.461 "data_size": 63488 00:33:53.461 }, 00:33:53.461 { 00:33:53.461 "name": "BaseBdev3", 00:33:53.461 "uuid": "a42a51a1-0655-4a46-931a-8bc29c24a146", 00:33:53.461 "is_configured": true, 00:33:53.461 "data_offset": 2048, 00:33:53.461 "data_size": 63488 00:33:53.461 } 00:33:53.461 ] 00:33:53.461 }' 00:33:53.461 23:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:53.461 23:15:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:53.719 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:33:53.719 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:33:53.719 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:53.719 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:53.719 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:33:53.719 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:53.719 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:33:53.719 23:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.719 23:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:53.719 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:53.719 [2024-12-09 23:15:34.278755] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:53.719 23:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.719 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:53.719 "name": "Existed_Raid", 00:33:53.719 "aliases": [ 00:33:53.719 "5c9957e1-dc62-4585-b2e8-c2bcd04c410e" 00:33:53.719 ], 00:33:53.719 "product_name": "Raid Volume", 00:33:53.719 "block_size": 512, 00:33:53.719 "num_blocks": 190464, 00:33:53.719 "uuid": "5c9957e1-dc62-4585-b2e8-c2bcd04c410e", 00:33:53.719 "assigned_rate_limits": { 00:33:53.719 "rw_ios_per_sec": 0, 00:33:53.719 "rw_mbytes_per_sec": 0, 00:33:53.719 
"r_mbytes_per_sec": 0, 00:33:53.719 "w_mbytes_per_sec": 0 00:33:53.719 }, 00:33:53.719 "claimed": false, 00:33:53.719 "zoned": false, 00:33:53.719 "supported_io_types": { 00:33:53.719 "read": true, 00:33:53.719 "write": true, 00:33:53.719 "unmap": true, 00:33:53.719 "flush": true, 00:33:53.719 "reset": true, 00:33:53.719 "nvme_admin": false, 00:33:53.719 "nvme_io": false, 00:33:53.719 "nvme_io_md": false, 00:33:53.719 "write_zeroes": true, 00:33:53.719 "zcopy": false, 00:33:53.719 "get_zone_info": false, 00:33:53.719 "zone_management": false, 00:33:53.719 "zone_append": false, 00:33:53.719 "compare": false, 00:33:53.719 "compare_and_write": false, 00:33:53.719 "abort": false, 00:33:53.719 "seek_hole": false, 00:33:53.719 "seek_data": false, 00:33:53.719 "copy": false, 00:33:53.719 "nvme_iov_md": false 00:33:53.719 }, 00:33:53.719 "memory_domains": [ 00:33:53.719 { 00:33:53.719 "dma_device_id": "system", 00:33:53.719 "dma_device_type": 1 00:33:53.719 }, 00:33:53.719 { 00:33:53.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:53.719 "dma_device_type": 2 00:33:53.719 }, 00:33:53.719 { 00:33:53.719 "dma_device_id": "system", 00:33:53.719 "dma_device_type": 1 00:33:53.719 }, 00:33:53.719 { 00:33:53.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:53.719 "dma_device_type": 2 00:33:53.719 }, 00:33:53.719 { 00:33:53.719 "dma_device_id": "system", 00:33:53.719 "dma_device_type": 1 00:33:53.719 }, 00:33:53.719 { 00:33:53.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:53.719 "dma_device_type": 2 00:33:53.719 } 00:33:53.719 ], 00:33:53.719 "driver_specific": { 00:33:53.719 "raid": { 00:33:53.719 "uuid": "5c9957e1-dc62-4585-b2e8-c2bcd04c410e", 00:33:53.719 "strip_size_kb": 64, 00:33:53.719 "state": "online", 00:33:53.719 "raid_level": "concat", 00:33:53.719 "superblock": true, 00:33:53.719 "num_base_bdevs": 3, 00:33:53.719 "num_base_bdevs_discovered": 3, 00:33:53.719 "num_base_bdevs_operational": 3, 00:33:53.719 "base_bdevs_list": [ 00:33:53.719 { 00:33:53.719 
"name": "BaseBdev1", 00:33:53.719 "uuid": "9ff86918-f36e-47a4-b93b-a6e46ebfd534", 00:33:53.719 "is_configured": true, 00:33:53.719 "data_offset": 2048, 00:33:53.719 "data_size": 63488 00:33:53.719 }, 00:33:53.719 { 00:33:53.719 "name": "BaseBdev2", 00:33:53.719 "uuid": "342ce7bb-53b3-414d-9406-46d373fa66e4", 00:33:53.719 "is_configured": true, 00:33:53.719 "data_offset": 2048, 00:33:53.719 "data_size": 63488 00:33:53.719 }, 00:33:53.719 { 00:33:53.719 "name": "BaseBdev3", 00:33:53.719 "uuid": "a42a51a1-0655-4a46-931a-8bc29c24a146", 00:33:53.719 "is_configured": true, 00:33:53.719 "data_offset": 2048, 00:33:53.719 "data_size": 63488 00:33:53.719 } 00:33:53.719 ] 00:33:53.719 } 00:33:53.719 } 00:33:53.719 }' 00:33:53.719 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:53.980 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:33:53.980 BaseBdev2 00:33:53.980 BaseBdev3' 00:33:53.980 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:53.980 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:33:53.980 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:53.980 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:53.980 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:33:53.980 23:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.980 23:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:53.980 23:15:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.980 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:53.980 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:53.980 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:53.980 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:53.980 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:33:53.980 23:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.980 23:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:53.980 23:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.980 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:53.980 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:53.980 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:53.980 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:33:53.980 23:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.980 23:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:53.980 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:53.980 23:15:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.981 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:53.981 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:53.981 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:33:53.981 23:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.981 23:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:53.981 [2024-12-09 23:15:34.534375] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:53.981 [2024-12-09 23:15:34.534415] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:53.981 [2024-12-09 23:15:34.534476] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:54.239 23:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.239 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:33:54.239 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:33:54.239 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:33:54.239 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:33:54.239 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:33:54.239 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:33:54.239 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:54.239 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:33:54.239 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:33:54.239 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:54.239 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:54.239 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:54.239 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:54.239 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:54.239 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:54.239 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:54.239 23:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.239 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:54.239 23:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:54.239 23:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.239 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:54.239 "name": "Existed_Raid", 00:33:54.239 "uuid": "5c9957e1-dc62-4585-b2e8-c2bcd04c410e", 00:33:54.239 "strip_size_kb": 64, 00:33:54.239 "state": "offline", 00:33:54.239 "raid_level": "concat", 00:33:54.239 "superblock": true, 00:33:54.239 "num_base_bdevs": 3, 00:33:54.239 "num_base_bdevs_discovered": 2, 00:33:54.239 "num_base_bdevs_operational": 2, 00:33:54.239 "base_bdevs_list": [ 00:33:54.239 { 00:33:54.239 "name": null, 00:33:54.239 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:33:54.239 "is_configured": false, 00:33:54.239 "data_offset": 0, 00:33:54.239 "data_size": 63488 00:33:54.239 }, 00:33:54.239 { 00:33:54.239 "name": "BaseBdev2", 00:33:54.239 "uuid": "342ce7bb-53b3-414d-9406-46d373fa66e4", 00:33:54.239 "is_configured": true, 00:33:54.239 "data_offset": 2048, 00:33:54.239 "data_size": 63488 00:33:54.239 }, 00:33:54.239 { 00:33:54.239 "name": "BaseBdev3", 00:33:54.239 "uuid": "a42a51a1-0655-4a46-931a-8bc29c24a146", 00:33:54.240 "is_configured": true, 00:33:54.240 "data_offset": 2048, 00:33:54.240 "data_size": 63488 00:33:54.240 } 00:33:54.240 ] 00:33:54.240 }' 00:33:54.240 23:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:54.240 23:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:54.498 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:33:54.498 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:54.498 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:54.498 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.498 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:33:54.498 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:54.498 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.498 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:33:54.498 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:54.498 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:33:54.498 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.498 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:54.498 [2024-12-09 23:15:35.119171] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:54.756 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.756 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:33:54.756 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:54.756 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:54.756 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.756 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:33:54.756 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:54.756 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.756 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:33:54.757 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:54.757 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:33:54.757 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.757 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:54.757 [2024-12-09 23:15:35.272466] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:33:54.757 [2024-12-09 23:15:35.272529] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:33:54.757 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.757 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:33:54.757 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:54.757 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:54.757 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:33:54.757 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.757 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:54.757 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.015 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:33:55.015 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:33:55.015 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:33:55.015 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:33:55.015 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:55.015 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:33:55.015 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.015 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:55.015 BaseBdev2 00:33:55.015 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.015 
23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:33:55.015 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:33:55.015 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:55.015 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:33:55.015 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:55.015 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:55.015 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:55.015 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.015 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:55.015 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.015 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:55.015 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.015 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:55.015 [ 00:33:55.015 { 00:33:55.015 "name": "BaseBdev2", 00:33:55.015 "aliases": [ 00:33:55.015 "792fd33c-8d0c-42bc-b39a-a0bd83117d65" 00:33:55.015 ], 00:33:55.015 "product_name": "Malloc disk", 00:33:55.015 "block_size": 512, 00:33:55.015 "num_blocks": 65536, 00:33:55.015 "uuid": "792fd33c-8d0c-42bc-b39a-a0bd83117d65", 00:33:55.015 "assigned_rate_limits": { 00:33:55.015 "rw_ios_per_sec": 0, 00:33:55.015 "rw_mbytes_per_sec": 0, 00:33:55.015 "r_mbytes_per_sec": 0, 00:33:55.015 "w_mbytes_per_sec": 0 
00:33:55.015 }, 00:33:55.015 "claimed": false, 00:33:55.015 "zoned": false, 00:33:55.015 "supported_io_types": { 00:33:55.015 "read": true, 00:33:55.015 "write": true, 00:33:55.015 "unmap": true, 00:33:55.015 "flush": true, 00:33:55.015 "reset": true, 00:33:55.015 "nvme_admin": false, 00:33:55.015 "nvme_io": false, 00:33:55.015 "nvme_io_md": false, 00:33:55.015 "write_zeroes": true, 00:33:55.015 "zcopy": true, 00:33:55.015 "get_zone_info": false, 00:33:55.015 "zone_management": false, 00:33:55.015 "zone_append": false, 00:33:55.015 "compare": false, 00:33:55.015 "compare_and_write": false, 00:33:55.015 "abort": true, 00:33:55.015 "seek_hole": false, 00:33:55.015 "seek_data": false, 00:33:55.015 "copy": true, 00:33:55.015 "nvme_iov_md": false 00:33:55.015 }, 00:33:55.015 "memory_domains": [ 00:33:55.015 { 00:33:55.015 "dma_device_id": "system", 00:33:55.015 "dma_device_type": 1 00:33:55.015 }, 00:33:55.015 { 00:33:55.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:55.015 "dma_device_type": 2 00:33:55.015 } 00:33:55.015 ], 00:33:55.015 "driver_specific": {} 00:33:55.015 } 00:33:55.015 ] 00:33:55.015 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.015 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:33:55.015 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:55.016 BaseBdev3 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:55.016 [ 00:33:55.016 { 00:33:55.016 "name": "BaseBdev3", 00:33:55.016 "aliases": [ 00:33:55.016 "83aa5990-6847-4cf8-8963-7f1d201a350b" 00:33:55.016 ], 00:33:55.016 "product_name": "Malloc disk", 00:33:55.016 "block_size": 512, 00:33:55.016 "num_blocks": 65536, 00:33:55.016 "uuid": "83aa5990-6847-4cf8-8963-7f1d201a350b", 00:33:55.016 "assigned_rate_limits": { 00:33:55.016 "rw_ios_per_sec": 0, 00:33:55.016 "rw_mbytes_per_sec": 0, 
00:33:55.016 "r_mbytes_per_sec": 0, 00:33:55.016 "w_mbytes_per_sec": 0 00:33:55.016 }, 00:33:55.016 "claimed": false, 00:33:55.016 "zoned": false, 00:33:55.016 "supported_io_types": { 00:33:55.016 "read": true, 00:33:55.016 "write": true, 00:33:55.016 "unmap": true, 00:33:55.016 "flush": true, 00:33:55.016 "reset": true, 00:33:55.016 "nvme_admin": false, 00:33:55.016 "nvme_io": false, 00:33:55.016 "nvme_io_md": false, 00:33:55.016 "write_zeroes": true, 00:33:55.016 "zcopy": true, 00:33:55.016 "get_zone_info": false, 00:33:55.016 "zone_management": false, 00:33:55.016 "zone_append": false, 00:33:55.016 "compare": false, 00:33:55.016 "compare_and_write": false, 00:33:55.016 "abort": true, 00:33:55.016 "seek_hole": false, 00:33:55.016 "seek_data": false, 00:33:55.016 "copy": true, 00:33:55.016 "nvme_iov_md": false 00:33:55.016 }, 00:33:55.016 "memory_domains": [ 00:33:55.016 { 00:33:55.016 "dma_device_id": "system", 00:33:55.016 "dma_device_type": 1 00:33:55.016 }, 00:33:55.016 { 00:33:55.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:55.016 "dma_device_type": 2 00:33:55.016 } 00:33:55.016 ], 00:33:55.016 "driver_specific": {} 00:33:55.016 } 00:33:55.016 ] 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:33:55.016 [2024-12-09 23:15:35.604430] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:55.016 [2024-12-09 23:15:35.604481] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:55.016 [2024-12-09 23:15:35.604526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:55.016 [2024-12-09 23:15:35.606719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:55.016 23:15:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:55.016 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.275 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:55.275 "name": "Existed_Raid", 00:33:55.275 "uuid": "ba5a31ab-50a5-4420-8084-2d3c02ef3313", 00:33:55.275 "strip_size_kb": 64, 00:33:55.275 "state": "configuring", 00:33:55.275 "raid_level": "concat", 00:33:55.275 "superblock": true, 00:33:55.275 "num_base_bdevs": 3, 00:33:55.275 "num_base_bdevs_discovered": 2, 00:33:55.275 "num_base_bdevs_operational": 3, 00:33:55.275 "base_bdevs_list": [ 00:33:55.275 { 00:33:55.275 "name": "BaseBdev1", 00:33:55.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:55.275 "is_configured": false, 00:33:55.275 "data_offset": 0, 00:33:55.275 "data_size": 0 00:33:55.275 }, 00:33:55.275 { 00:33:55.275 "name": "BaseBdev2", 00:33:55.275 "uuid": "792fd33c-8d0c-42bc-b39a-a0bd83117d65", 00:33:55.275 "is_configured": true, 00:33:55.275 "data_offset": 2048, 00:33:55.275 "data_size": 63488 00:33:55.275 }, 00:33:55.275 { 00:33:55.275 "name": "BaseBdev3", 00:33:55.275 "uuid": "83aa5990-6847-4cf8-8963-7f1d201a350b", 00:33:55.275 "is_configured": true, 00:33:55.275 "data_offset": 2048, 00:33:55.275 "data_size": 63488 00:33:55.275 } 00:33:55.275 ] 00:33:55.275 }' 00:33:55.275 23:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:55.275 23:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:55.534 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:33:55.534 23:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.534 23:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:55.534 [2024-12-09 23:15:36.011849] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:55.534 23:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.534 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:33:55.534 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:55.534 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:55.534 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:33:55.534 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:55.534 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:55.534 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:55.534 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:55.534 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:55.534 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:55.534 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:55.534 23:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.534 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:33:55.534 23:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:55.534 23:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.534 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:55.534 "name": "Existed_Raid", 00:33:55.534 "uuid": "ba5a31ab-50a5-4420-8084-2d3c02ef3313", 00:33:55.534 "strip_size_kb": 64, 00:33:55.534 "state": "configuring", 00:33:55.534 "raid_level": "concat", 00:33:55.534 "superblock": true, 00:33:55.534 "num_base_bdevs": 3, 00:33:55.534 "num_base_bdevs_discovered": 1, 00:33:55.534 "num_base_bdevs_operational": 3, 00:33:55.534 "base_bdevs_list": [ 00:33:55.534 { 00:33:55.534 "name": "BaseBdev1", 00:33:55.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:55.534 "is_configured": false, 00:33:55.534 "data_offset": 0, 00:33:55.534 "data_size": 0 00:33:55.534 }, 00:33:55.534 { 00:33:55.534 "name": null, 00:33:55.534 "uuid": "792fd33c-8d0c-42bc-b39a-a0bd83117d65", 00:33:55.534 "is_configured": false, 00:33:55.534 "data_offset": 0, 00:33:55.534 "data_size": 63488 00:33:55.534 }, 00:33:55.534 { 00:33:55.534 "name": "BaseBdev3", 00:33:55.534 "uuid": "83aa5990-6847-4cf8-8963-7f1d201a350b", 00:33:55.534 "is_configured": true, 00:33:55.534 "data_offset": 2048, 00:33:55.534 "data_size": 63488 00:33:55.534 } 00:33:55.534 ] 00:33:55.534 }' 00:33:55.534 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:55.534 23:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:55.792 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:55.792 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:33:55.792 23:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:33:55.792 23:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:55.792 23:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.792 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:33:55.792 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:33:55.792 23:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.792 23:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:56.051 [2024-12-09 23:15:36.456002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:56.051 BaseBdev1 00:33:56.051 23:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.051 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:33:56.051 23:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:33:56.051 23:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:56.051 23:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:33:56.051 23:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:56.051 23:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:56.051 23:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:56.051 23:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.051 23:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:56.051 23:15:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.051 23:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:56.051 23:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.051 23:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:56.051 [ 00:33:56.051 { 00:33:56.051 "name": "BaseBdev1", 00:33:56.051 "aliases": [ 00:33:56.051 "b43e4801-8bf4-42bd-b727-30d127f65792" 00:33:56.051 ], 00:33:56.051 "product_name": "Malloc disk", 00:33:56.051 "block_size": 512, 00:33:56.051 "num_blocks": 65536, 00:33:56.051 "uuid": "b43e4801-8bf4-42bd-b727-30d127f65792", 00:33:56.051 "assigned_rate_limits": { 00:33:56.051 "rw_ios_per_sec": 0, 00:33:56.051 "rw_mbytes_per_sec": 0, 00:33:56.051 "r_mbytes_per_sec": 0, 00:33:56.051 "w_mbytes_per_sec": 0 00:33:56.051 }, 00:33:56.051 "claimed": true, 00:33:56.051 "claim_type": "exclusive_write", 00:33:56.051 "zoned": false, 00:33:56.051 "supported_io_types": { 00:33:56.051 "read": true, 00:33:56.051 "write": true, 00:33:56.051 "unmap": true, 00:33:56.051 "flush": true, 00:33:56.051 "reset": true, 00:33:56.051 "nvme_admin": false, 00:33:56.051 "nvme_io": false, 00:33:56.051 "nvme_io_md": false, 00:33:56.051 "write_zeroes": true, 00:33:56.051 "zcopy": true, 00:33:56.051 "get_zone_info": false, 00:33:56.051 "zone_management": false, 00:33:56.051 "zone_append": false, 00:33:56.051 "compare": false, 00:33:56.051 "compare_and_write": false, 00:33:56.051 "abort": true, 00:33:56.051 "seek_hole": false, 00:33:56.051 "seek_data": false, 00:33:56.051 "copy": true, 00:33:56.051 "nvme_iov_md": false 00:33:56.051 }, 00:33:56.051 "memory_domains": [ 00:33:56.051 { 00:33:56.051 "dma_device_id": "system", 00:33:56.051 "dma_device_type": 1 00:33:56.051 }, 00:33:56.051 { 00:33:56.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:56.051 
"dma_device_type": 2 00:33:56.051 } 00:33:56.051 ], 00:33:56.051 "driver_specific": {} 00:33:56.051 } 00:33:56.051 ] 00:33:56.051 23:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.051 23:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:33:56.051 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:33:56.051 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:56.051 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:56.051 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:33:56.051 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:56.051 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:56.051 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:56.051 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:56.051 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:56.051 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:56.051 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:56.051 23:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.051 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:56.051 23:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:33:56.051 23:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.051 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:56.051 "name": "Existed_Raid", 00:33:56.051 "uuid": "ba5a31ab-50a5-4420-8084-2d3c02ef3313", 00:33:56.051 "strip_size_kb": 64, 00:33:56.051 "state": "configuring", 00:33:56.051 "raid_level": "concat", 00:33:56.051 "superblock": true, 00:33:56.051 "num_base_bdevs": 3, 00:33:56.051 "num_base_bdevs_discovered": 2, 00:33:56.051 "num_base_bdevs_operational": 3, 00:33:56.051 "base_bdevs_list": [ 00:33:56.051 { 00:33:56.051 "name": "BaseBdev1", 00:33:56.051 "uuid": "b43e4801-8bf4-42bd-b727-30d127f65792", 00:33:56.051 "is_configured": true, 00:33:56.051 "data_offset": 2048, 00:33:56.051 "data_size": 63488 00:33:56.051 }, 00:33:56.051 { 00:33:56.051 "name": null, 00:33:56.051 "uuid": "792fd33c-8d0c-42bc-b39a-a0bd83117d65", 00:33:56.051 "is_configured": false, 00:33:56.051 "data_offset": 0, 00:33:56.051 "data_size": 63488 00:33:56.051 }, 00:33:56.051 { 00:33:56.051 "name": "BaseBdev3", 00:33:56.051 "uuid": "83aa5990-6847-4cf8-8963-7f1d201a350b", 00:33:56.051 "is_configured": true, 00:33:56.051 "data_offset": 2048, 00:33:56.051 "data_size": 63488 00:33:56.051 } 00:33:56.051 ] 00:33:56.051 }' 00:33:56.051 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:56.051 23:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:56.311 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:33:56.311 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:56.311 23:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.311 23:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:33:56.570 23:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.570 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:33:56.570 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:33:56.570 23:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.570 23:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:56.570 [2024-12-09 23:15:36.963384] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:33:56.570 23:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.570 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:33:56.570 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:56.570 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:56.570 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:33:56.570 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:56.570 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:56.570 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:56.570 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:56.570 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:56.570 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:56.570 
23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:56.570 23:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:56.570 23:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.570 23:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:56.570 23:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.570 23:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:56.570 "name": "Existed_Raid", 00:33:56.570 "uuid": "ba5a31ab-50a5-4420-8084-2d3c02ef3313", 00:33:56.570 "strip_size_kb": 64, 00:33:56.570 "state": "configuring", 00:33:56.571 "raid_level": "concat", 00:33:56.571 "superblock": true, 00:33:56.571 "num_base_bdevs": 3, 00:33:56.571 "num_base_bdevs_discovered": 1, 00:33:56.571 "num_base_bdevs_operational": 3, 00:33:56.571 "base_bdevs_list": [ 00:33:56.571 { 00:33:56.571 "name": "BaseBdev1", 00:33:56.571 "uuid": "b43e4801-8bf4-42bd-b727-30d127f65792", 00:33:56.571 "is_configured": true, 00:33:56.571 "data_offset": 2048, 00:33:56.571 "data_size": 63488 00:33:56.571 }, 00:33:56.571 { 00:33:56.571 "name": null, 00:33:56.571 "uuid": "792fd33c-8d0c-42bc-b39a-a0bd83117d65", 00:33:56.571 "is_configured": false, 00:33:56.571 "data_offset": 0, 00:33:56.571 "data_size": 63488 00:33:56.571 }, 00:33:56.571 { 00:33:56.571 "name": null, 00:33:56.571 "uuid": "83aa5990-6847-4cf8-8963-7f1d201a350b", 00:33:56.571 "is_configured": false, 00:33:56.571 "data_offset": 0, 00:33:56.571 "data_size": 63488 00:33:56.571 } 00:33:56.571 ] 00:33:56.571 }' 00:33:56.571 23:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:56.571 23:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:56.829 
23:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:56.829 23:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:33:56.829 23:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.829 23:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:56.829 23:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.830 23:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:33:56.830 23:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:33:56.830 23:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.830 23:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:56.830 [2024-12-09 23:15:37.446736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:56.830 23:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.830 23:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:33:56.830 23:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:56.830 23:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:56.830 23:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:33:56.830 23:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:56.830 23:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:33:56.830 23:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:56.830 23:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:56.830 23:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:56.830 23:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:56.830 23:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:56.830 23:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:56.830 23:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.830 23:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:57.099 23:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.099 23:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:57.099 "name": "Existed_Raid", 00:33:57.099 "uuid": "ba5a31ab-50a5-4420-8084-2d3c02ef3313", 00:33:57.099 "strip_size_kb": 64, 00:33:57.099 "state": "configuring", 00:33:57.099 "raid_level": "concat", 00:33:57.099 "superblock": true, 00:33:57.099 "num_base_bdevs": 3, 00:33:57.099 "num_base_bdevs_discovered": 2, 00:33:57.099 "num_base_bdevs_operational": 3, 00:33:57.099 "base_bdevs_list": [ 00:33:57.099 { 00:33:57.099 "name": "BaseBdev1", 00:33:57.099 "uuid": "b43e4801-8bf4-42bd-b727-30d127f65792", 00:33:57.099 "is_configured": true, 00:33:57.099 "data_offset": 2048, 00:33:57.099 "data_size": 63488 00:33:57.099 }, 00:33:57.099 { 00:33:57.099 "name": null, 00:33:57.099 "uuid": "792fd33c-8d0c-42bc-b39a-a0bd83117d65", 00:33:57.099 "is_configured": false, 00:33:57.099 "data_offset": 0, 00:33:57.099 "data_size": 
63488 00:33:57.099 }, 00:33:57.099 { 00:33:57.099 "name": "BaseBdev3", 00:33:57.099 "uuid": "83aa5990-6847-4cf8-8963-7f1d201a350b", 00:33:57.099 "is_configured": true, 00:33:57.099 "data_offset": 2048, 00:33:57.099 "data_size": 63488 00:33:57.099 } 00:33:57.099 ] 00:33:57.099 }' 00:33:57.099 23:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:57.099 23:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:57.358 23:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:57.358 23:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:33:57.358 23:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.358 23:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:57.358 23:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.358 23:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:33:57.358 23:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:33:57.358 23:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.358 23:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:57.358 [2024-12-09 23:15:37.922373] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:57.616 23:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.616 23:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:33:57.616 23:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:33:57.616 23:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:57.616 23:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:33:57.616 23:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:57.616 23:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:57.616 23:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:57.616 23:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:57.616 23:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:57.616 23:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:57.616 23:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:57.616 23:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:57.616 23:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.616 23:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:57.616 23:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.616 23:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:57.616 "name": "Existed_Raid", 00:33:57.616 "uuid": "ba5a31ab-50a5-4420-8084-2d3c02ef3313", 00:33:57.616 "strip_size_kb": 64, 00:33:57.616 "state": "configuring", 00:33:57.616 "raid_level": "concat", 00:33:57.616 "superblock": true, 00:33:57.616 "num_base_bdevs": 3, 00:33:57.616 "num_base_bdevs_discovered": 1, 00:33:57.616 "num_base_bdevs_operational": 
3, 00:33:57.616 "base_bdevs_list": [ 00:33:57.616 { 00:33:57.616 "name": null, 00:33:57.616 "uuid": "b43e4801-8bf4-42bd-b727-30d127f65792", 00:33:57.616 "is_configured": false, 00:33:57.616 "data_offset": 0, 00:33:57.616 "data_size": 63488 00:33:57.616 }, 00:33:57.616 { 00:33:57.616 "name": null, 00:33:57.616 "uuid": "792fd33c-8d0c-42bc-b39a-a0bd83117d65", 00:33:57.616 "is_configured": false, 00:33:57.616 "data_offset": 0, 00:33:57.616 "data_size": 63488 00:33:57.616 }, 00:33:57.616 { 00:33:57.616 "name": "BaseBdev3", 00:33:57.616 "uuid": "83aa5990-6847-4cf8-8963-7f1d201a350b", 00:33:57.616 "is_configured": true, 00:33:57.616 "data_offset": 2048, 00:33:57.616 "data_size": 63488 00:33:57.616 } 00:33:57.616 ] 00:33:57.616 }' 00:33:57.616 23:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:57.616 23:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:57.874 23:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:57.874 23:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.874 23:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:33:57.874 23:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:57.874 23:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.131 23:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:33:58.131 23:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:33:58.131 23:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.131 23:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:33:58.131 [2024-12-09 23:15:38.529335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:58.131 23:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.131 23:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:33:58.131 23:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:58.131 23:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:58.131 23:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:33:58.131 23:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:58.131 23:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:58.131 23:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:58.131 23:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:58.131 23:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:58.131 23:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:58.131 23:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:58.131 23:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:58.131 23:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.131 23:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:58.131 23:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:33:58.131 23:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:58.131 "name": "Existed_Raid", 00:33:58.131 "uuid": "ba5a31ab-50a5-4420-8084-2d3c02ef3313", 00:33:58.131 "strip_size_kb": 64, 00:33:58.131 "state": "configuring", 00:33:58.131 "raid_level": "concat", 00:33:58.131 "superblock": true, 00:33:58.131 "num_base_bdevs": 3, 00:33:58.131 "num_base_bdevs_discovered": 2, 00:33:58.131 "num_base_bdevs_operational": 3, 00:33:58.131 "base_bdevs_list": [ 00:33:58.131 { 00:33:58.131 "name": null, 00:33:58.131 "uuid": "b43e4801-8bf4-42bd-b727-30d127f65792", 00:33:58.131 "is_configured": false, 00:33:58.131 "data_offset": 0, 00:33:58.131 "data_size": 63488 00:33:58.131 }, 00:33:58.131 { 00:33:58.131 "name": "BaseBdev2", 00:33:58.131 "uuid": "792fd33c-8d0c-42bc-b39a-a0bd83117d65", 00:33:58.131 "is_configured": true, 00:33:58.131 "data_offset": 2048, 00:33:58.131 "data_size": 63488 00:33:58.131 }, 00:33:58.131 { 00:33:58.131 "name": "BaseBdev3", 00:33:58.131 "uuid": "83aa5990-6847-4cf8-8963-7f1d201a350b", 00:33:58.131 "is_configured": true, 00:33:58.131 "data_offset": 2048, 00:33:58.131 "data_size": 63488 00:33:58.131 } 00:33:58.131 ] 00:33:58.131 }' 00:33:58.131 23:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:58.131 23:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:58.389 23:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:58.389 23:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:33:58.389 23:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.389 23:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:58.389 23:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:33:58.389 23:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:33:58.389 23:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:58.389 23:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.389 23:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:58.389 23:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:33:58.389 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.647 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b43e4801-8bf4-42bd-b727-30d127f65792 00:33:58.647 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.647 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:58.647 [2024-12-09 23:15:39.072166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:33:58.647 [2024-12-09 23:15:39.072454] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:33:58.647 [2024-12-09 23:15:39.072478] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:33:58.647 [2024-12-09 23:15:39.072762] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:33:58.647 [2024-12-09 23:15:39.072926] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:33:58.647 [2024-12-09 23:15:39.072941] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:33:58.647 [2024-12-09 23:15:39.073082] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:33:58.647 NewBaseBdev 00:33:58.647 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.648 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:33:58.648 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:33:58.648 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:58.648 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:33:58.648 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:58.648 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:58.648 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:58.648 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.648 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:58.648 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.648 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:33:58.648 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.648 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:58.648 [ 00:33:58.648 { 00:33:58.648 "name": "NewBaseBdev", 00:33:58.648 "aliases": [ 00:33:58.648 "b43e4801-8bf4-42bd-b727-30d127f65792" 00:33:58.648 ], 00:33:58.648 "product_name": "Malloc disk", 00:33:58.648 "block_size": 512, 00:33:58.648 "num_blocks": 65536, 00:33:58.648 "uuid": "b43e4801-8bf4-42bd-b727-30d127f65792", 
00:33:58.648 "assigned_rate_limits": { 00:33:58.648 "rw_ios_per_sec": 0, 00:33:58.648 "rw_mbytes_per_sec": 0, 00:33:58.648 "r_mbytes_per_sec": 0, 00:33:58.648 "w_mbytes_per_sec": 0 00:33:58.648 }, 00:33:58.648 "claimed": true, 00:33:58.648 "claim_type": "exclusive_write", 00:33:58.648 "zoned": false, 00:33:58.648 "supported_io_types": { 00:33:58.648 "read": true, 00:33:58.648 "write": true, 00:33:58.648 "unmap": true, 00:33:58.648 "flush": true, 00:33:58.648 "reset": true, 00:33:58.648 "nvme_admin": false, 00:33:58.648 "nvme_io": false, 00:33:58.648 "nvme_io_md": false, 00:33:58.648 "write_zeroes": true, 00:33:58.648 "zcopy": true, 00:33:58.648 "get_zone_info": false, 00:33:58.648 "zone_management": false, 00:33:58.648 "zone_append": false, 00:33:58.648 "compare": false, 00:33:58.648 "compare_and_write": false, 00:33:58.648 "abort": true, 00:33:58.648 "seek_hole": false, 00:33:58.648 "seek_data": false, 00:33:58.648 "copy": true, 00:33:58.648 "nvme_iov_md": false 00:33:58.648 }, 00:33:58.648 "memory_domains": [ 00:33:58.648 { 00:33:58.648 "dma_device_id": "system", 00:33:58.648 "dma_device_type": 1 00:33:58.648 }, 00:33:58.648 { 00:33:58.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:58.648 "dma_device_type": 2 00:33:58.648 } 00:33:58.648 ], 00:33:58.648 "driver_specific": {} 00:33:58.648 } 00:33:58.648 ] 00:33:58.648 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.648 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:33:58.648 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:33:58.648 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:58.648 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:58.648 23:15:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:33:58.648 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:58.648 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:58.648 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:58.648 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:58.648 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:58.648 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:58.648 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:58.648 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.648 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:58.648 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:58.648 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.648 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:58.648 "name": "Existed_Raid", 00:33:58.648 "uuid": "ba5a31ab-50a5-4420-8084-2d3c02ef3313", 00:33:58.648 "strip_size_kb": 64, 00:33:58.648 "state": "online", 00:33:58.648 "raid_level": "concat", 00:33:58.648 "superblock": true, 00:33:58.648 "num_base_bdevs": 3, 00:33:58.648 "num_base_bdevs_discovered": 3, 00:33:58.648 "num_base_bdevs_operational": 3, 00:33:58.648 "base_bdevs_list": [ 00:33:58.648 { 00:33:58.648 "name": "NewBaseBdev", 00:33:58.648 "uuid": "b43e4801-8bf4-42bd-b727-30d127f65792", 00:33:58.648 "is_configured": true, 00:33:58.648 "data_offset": 2048, 
00:33:58.648 "data_size": 63488 00:33:58.648 }, 00:33:58.648 { 00:33:58.648 "name": "BaseBdev2", 00:33:58.648 "uuid": "792fd33c-8d0c-42bc-b39a-a0bd83117d65", 00:33:58.648 "is_configured": true, 00:33:58.648 "data_offset": 2048, 00:33:58.648 "data_size": 63488 00:33:58.648 }, 00:33:58.648 { 00:33:58.648 "name": "BaseBdev3", 00:33:58.648 "uuid": "83aa5990-6847-4cf8-8963-7f1d201a350b", 00:33:58.648 "is_configured": true, 00:33:58.648 "data_offset": 2048, 00:33:58.648 "data_size": 63488 00:33:58.648 } 00:33:58.648 ] 00:33:58.648 }' 00:33:58.648 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:58.648 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:59.212 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:59.213 [2024-12-09 23:15:39.567903] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:59.213 "name": "Existed_Raid", 00:33:59.213 "aliases": [ 00:33:59.213 "ba5a31ab-50a5-4420-8084-2d3c02ef3313" 00:33:59.213 ], 00:33:59.213 "product_name": "Raid Volume", 00:33:59.213 "block_size": 512, 00:33:59.213 "num_blocks": 190464, 00:33:59.213 "uuid": "ba5a31ab-50a5-4420-8084-2d3c02ef3313", 00:33:59.213 "assigned_rate_limits": { 00:33:59.213 "rw_ios_per_sec": 0, 00:33:59.213 "rw_mbytes_per_sec": 0, 00:33:59.213 "r_mbytes_per_sec": 0, 00:33:59.213 "w_mbytes_per_sec": 0 00:33:59.213 }, 00:33:59.213 "claimed": false, 00:33:59.213 "zoned": false, 00:33:59.213 "supported_io_types": { 00:33:59.213 "read": true, 00:33:59.213 "write": true, 00:33:59.213 "unmap": true, 00:33:59.213 "flush": true, 00:33:59.213 "reset": true, 00:33:59.213 "nvme_admin": false, 00:33:59.213 "nvme_io": false, 00:33:59.213 "nvme_io_md": false, 00:33:59.213 "write_zeroes": true, 00:33:59.213 "zcopy": false, 00:33:59.213 "get_zone_info": false, 00:33:59.213 "zone_management": false, 00:33:59.213 "zone_append": false, 00:33:59.213 "compare": false, 00:33:59.213 "compare_and_write": false, 00:33:59.213 "abort": false, 00:33:59.213 "seek_hole": false, 00:33:59.213 "seek_data": false, 00:33:59.213 "copy": false, 00:33:59.213 "nvme_iov_md": false 00:33:59.213 }, 00:33:59.213 "memory_domains": [ 00:33:59.213 { 00:33:59.213 "dma_device_id": "system", 00:33:59.213 "dma_device_type": 1 00:33:59.213 }, 00:33:59.213 { 00:33:59.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:59.213 "dma_device_type": 2 00:33:59.213 }, 00:33:59.213 { 00:33:59.213 "dma_device_id": "system", 00:33:59.213 "dma_device_type": 1 00:33:59.213 }, 00:33:59.213 { 00:33:59.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:59.213 "dma_device_type": 2 00:33:59.213 }, 00:33:59.213 { 
00:33:59.213 "dma_device_id": "system", 00:33:59.213 "dma_device_type": 1 00:33:59.213 }, 00:33:59.213 { 00:33:59.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:59.213 "dma_device_type": 2 00:33:59.213 } 00:33:59.213 ], 00:33:59.213 "driver_specific": { 00:33:59.213 "raid": { 00:33:59.213 "uuid": "ba5a31ab-50a5-4420-8084-2d3c02ef3313", 00:33:59.213 "strip_size_kb": 64, 00:33:59.213 "state": "online", 00:33:59.213 "raid_level": "concat", 00:33:59.213 "superblock": true, 00:33:59.213 "num_base_bdevs": 3, 00:33:59.213 "num_base_bdevs_discovered": 3, 00:33:59.213 "num_base_bdevs_operational": 3, 00:33:59.213 "base_bdevs_list": [ 00:33:59.213 { 00:33:59.213 "name": "NewBaseBdev", 00:33:59.213 "uuid": "b43e4801-8bf4-42bd-b727-30d127f65792", 00:33:59.213 "is_configured": true, 00:33:59.213 "data_offset": 2048, 00:33:59.213 "data_size": 63488 00:33:59.213 }, 00:33:59.213 { 00:33:59.213 "name": "BaseBdev2", 00:33:59.213 "uuid": "792fd33c-8d0c-42bc-b39a-a0bd83117d65", 00:33:59.213 "is_configured": true, 00:33:59.213 "data_offset": 2048, 00:33:59.213 "data_size": 63488 00:33:59.213 }, 00:33:59.213 { 00:33:59.213 "name": "BaseBdev3", 00:33:59.213 "uuid": "83aa5990-6847-4cf8-8963-7f1d201a350b", 00:33:59.213 "is_configured": true, 00:33:59.213 "data_offset": 2048, 00:33:59.213 "data_size": 63488 00:33:59.213 } 00:33:59.213 ] 00:33:59.213 } 00:33:59.213 } 00:33:59.213 }' 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:33:59.213 BaseBdev2 00:33:59.213 BaseBdev3' 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 
00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:59.213 [2024-12-09 23:15:39.819238] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:59.213 [2024-12-09 23:15:39.819275] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:59.213 [2024-12-09 23:15:39.819364] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:59.213 [2024-12-09 23:15:39.819436] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:59.213 [2024-12-09 23:15:39.819453] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66116 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66116 ']' 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66116 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:59.213 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66116 00:33:59.472 killing process with pid 66116 00:33:59.472 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:59.472 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:59.472 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66116' 00:33:59.472 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66116 00:33:59.472 [2024-12-09 23:15:39.864956] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:59.472 23:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66116 00:33:59.730 [2024-12-09 23:15:40.190014] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:01.104 23:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:34:01.104 00:34:01.104 real 0m10.654s 00:34:01.104 user 0m16.827s 00:34:01.104 sys 0m2.145s 00:34:01.104 23:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:01.104 ************************************ 00:34:01.104 END TEST raid_state_function_test_sb 
00:34:01.104 23:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:01.104 ************************************ 00:34:01.104 23:15:41 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:34:01.104 23:15:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:01.104 23:15:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:01.104 23:15:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:01.104 ************************************ 00:34:01.104 START TEST raid_superblock_test 00:34:01.104 ************************************ 00:34:01.104 23:15:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:34:01.104 23:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:34:01.104 23:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:34:01.104 23:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:34:01.104 23:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:34:01.104 23:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:34:01.104 23:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:34:01.104 23:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:34:01.104 23:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:34:01.104 23:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:34:01.104 23:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:34:01.104 23:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:34:01.104 23:15:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:34:01.104 23:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:34:01.104 23:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:34:01.104 23:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:34:01.104 23:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:34:01.104 23:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66731 00:34:01.104 23:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:34:01.104 23:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66731 00:34:01.104 23:15:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66731 ']' 00:34:01.104 23:15:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:01.104 23:15:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:01.104 23:15:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:01.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:01.104 23:15:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:01.104 23:15:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:01.104 [2024-12-09 23:15:41.557563] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:34:01.104 [2024-12-09 23:15:41.557698] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66731 ] 00:34:01.362 [2024-12-09 23:15:41.742434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:01.362 [2024-12-09 23:15:41.857599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:01.620 [2024-12-09 23:15:42.075406] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:01.620 [2024-12-09 23:15:42.075488] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:01.878 23:15:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:01.878 23:15:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:34:01.878 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:34:01.878 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:34:01.878 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:34:01.878 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:34:01.878 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:34:01.878 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:01.878 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:34:01.878 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:01.878 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:34:01.878 
23:15:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.878 23:15:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:01.878 malloc1 00:34:01.878 23:15:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.878 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:01.878 23:15:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.878 23:15:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:01.878 [2024-12-09 23:15:42.474932] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:01.878 [2024-12-09 23:15:42.475001] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:01.878 [2024-12-09 23:15:42.475027] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:34:01.878 [2024-12-09 23:15:42.475040] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:01.878 [2024-12-09 23:15:42.477614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:01.878 [2024-12-09 23:15:42.477657] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:01.878 pt1 00:34:01.878 23:15:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.879 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:34:01.879 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:34:01.879 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:34:01.879 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:34:01.879 23:15:42 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:34:01.879 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:01.879 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:34:01.879 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:01.879 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:34:01.879 23:15:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.879 23:15:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:02.143 malloc2 00:34:02.143 23:15:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.143 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:02.143 23:15:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.143 23:15:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:02.143 [2024-12-09 23:15:42.537735] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:02.143 [2024-12-09 23:15:42.537951] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:02.143 [2024-12-09 23:15:42.537993] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:34:02.143 [2024-12-09 23:15:42.538006] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:02.143 [2024-12-09 23:15:42.540612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:02.143 [2024-12-09 23:15:42.540656] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:02.143 
pt2 00:34:02.143 23:15:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.143 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:34:02.143 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:34:02.143 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:34:02.143 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:34:02.143 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:34:02.143 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:02.143 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:34:02.143 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:02.143 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:34:02.143 23:15:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.143 23:15:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:02.143 malloc3 00:34:02.143 23:15:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.143 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:34:02.143 23:15:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.143 23:15:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:02.143 [2024-12-09 23:15:42.610894] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:34:02.143 [2024-12-09 23:15:42.611009] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:02.143 [2024-12-09 23:15:42.611066] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:34:02.143 [2024-12-09 23:15:42.611106] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:02.143 [2024-12-09 23:15:42.613744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:02.143 [2024-12-09 23:15:42.613912] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:34:02.143 pt3 00:34:02.143 23:15:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.143 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:34:02.143 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:34:02.144 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:34:02.144 23:15:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.144 23:15:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:02.144 [2024-12-09 23:15:42.622953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:02.144 [2024-12-09 23:15:42.625103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:02.144 [2024-12-09 23:15:42.625178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:34:02.144 [2024-12-09 23:15:42.625344] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:34:02.144 [2024-12-09 23:15:42.625360] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:34:02.144 [2024-12-09 23:15:42.625666] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:34:02.144 [2024-12-09 23:15:42.625822] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:34:02.144 [2024-12-09 23:15:42.625832] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:34:02.144 [2024-12-09 23:15:42.626018] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:02.144 23:15:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.144 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:34:02.144 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:02.144 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:02.144 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:02.144 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:02.144 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:02.144 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:02.144 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:02.144 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:02.144 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:02.144 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:02.144 23:15:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.144 23:15:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:02.144 23:15:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:02.144 23:15:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.144 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:02.144 "name": "raid_bdev1", 00:34:02.144 "uuid": "11e2aeb3-04b2-4ca5-abee-092c48a8b810", 00:34:02.144 "strip_size_kb": 64, 00:34:02.144 "state": "online", 00:34:02.144 "raid_level": "concat", 00:34:02.144 "superblock": true, 00:34:02.144 "num_base_bdevs": 3, 00:34:02.144 "num_base_bdevs_discovered": 3, 00:34:02.144 "num_base_bdevs_operational": 3, 00:34:02.144 "base_bdevs_list": [ 00:34:02.144 { 00:34:02.144 "name": "pt1", 00:34:02.144 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:02.144 "is_configured": true, 00:34:02.144 "data_offset": 2048, 00:34:02.144 "data_size": 63488 00:34:02.144 }, 00:34:02.144 { 00:34:02.144 "name": "pt2", 00:34:02.144 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:02.144 "is_configured": true, 00:34:02.144 "data_offset": 2048, 00:34:02.144 "data_size": 63488 00:34:02.144 }, 00:34:02.144 { 00:34:02.144 "name": "pt3", 00:34:02.144 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:02.144 "is_configured": true, 00:34:02.144 "data_offset": 2048, 00:34:02.144 "data_size": 63488 00:34:02.144 } 00:34:02.144 ] 00:34:02.144 }' 00:34:02.144 23:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:02.144 23:15:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:02.403 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:34:02.403 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:34:02.403 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:34:02.403 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:34:02.403 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:34:02.403 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:34:02.662 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:34:02.662 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:02.662 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.662 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:02.662 [2024-12-09 23:15:43.050675] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:02.662 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.662 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:02.662 "name": "raid_bdev1", 00:34:02.662 "aliases": [ 00:34:02.662 "11e2aeb3-04b2-4ca5-abee-092c48a8b810" 00:34:02.662 ], 00:34:02.662 "product_name": "Raid Volume", 00:34:02.662 "block_size": 512, 00:34:02.662 "num_blocks": 190464, 00:34:02.662 "uuid": "11e2aeb3-04b2-4ca5-abee-092c48a8b810", 00:34:02.662 "assigned_rate_limits": { 00:34:02.662 "rw_ios_per_sec": 0, 00:34:02.662 "rw_mbytes_per_sec": 0, 00:34:02.662 "r_mbytes_per_sec": 0, 00:34:02.662 "w_mbytes_per_sec": 0 00:34:02.662 }, 00:34:02.662 "claimed": false, 00:34:02.662 "zoned": false, 00:34:02.662 "supported_io_types": { 00:34:02.662 "read": true, 00:34:02.662 "write": true, 00:34:02.662 "unmap": true, 00:34:02.662 "flush": true, 00:34:02.662 "reset": true, 00:34:02.662 "nvme_admin": false, 00:34:02.662 "nvme_io": false, 00:34:02.662 "nvme_io_md": false, 00:34:02.662 "write_zeroes": true, 00:34:02.662 "zcopy": false, 00:34:02.662 "get_zone_info": false, 00:34:02.662 "zone_management": false, 00:34:02.662 "zone_append": false, 00:34:02.662 "compare": 
false, 00:34:02.662 "compare_and_write": false, 00:34:02.662 "abort": false, 00:34:02.662 "seek_hole": false, 00:34:02.662 "seek_data": false, 00:34:02.662 "copy": false, 00:34:02.662 "nvme_iov_md": false 00:34:02.662 }, 00:34:02.662 "memory_domains": [ 00:34:02.662 { 00:34:02.662 "dma_device_id": "system", 00:34:02.662 "dma_device_type": 1 00:34:02.662 }, 00:34:02.662 { 00:34:02.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:02.662 "dma_device_type": 2 00:34:02.662 }, 00:34:02.662 { 00:34:02.662 "dma_device_id": "system", 00:34:02.662 "dma_device_type": 1 00:34:02.662 }, 00:34:02.662 { 00:34:02.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:02.662 "dma_device_type": 2 00:34:02.662 }, 00:34:02.662 { 00:34:02.662 "dma_device_id": "system", 00:34:02.662 "dma_device_type": 1 00:34:02.662 }, 00:34:02.662 { 00:34:02.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:02.662 "dma_device_type": 2 00:34:02.662 } 00:34:02.662 ], 00:34:02.662 "driver_specific": { 00:34:02.662 "raid": { 00:34:02.662 "uuid": "11e2aeb3-04b2-4ca5-abee-092c48a8b810", 00:34:02.662 "strip_size_kb": 64, 00:34:02.662 "state": "online", 00:34:02.662 "raid_level": "concat", 00:34:02.662 "superblock": true, 00:34:02.662 "num_base_bdevs": 3, 00:34:02.662 "num_base_bdevs_discovered": 3, 00:34:02.662 "num_base_bdevs_operational": 3, 00:34:02.662 "base_bdevs_list": [ 00:34:02.662 { 00:34:02.662 "name": "pt1", 00:34:02.662 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:02.662 "is_configured": true, 00:34:02.662 "data_offset": 2048, 00:34:02.662 "data_size": 63488 00:34:02.662 }, 00:34:02.662 { 00:34:02.662 "name": "pt2", 00:34:02.662 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:02.662 "is_configured": true, 00:34:02.662 "data_offset": 2048, 00:34:02.662 "data_size": 63488 00:34:02.662 }, 00:34:02.662 { 00:34:02.662 "name": "pt3", 00:34:02.663 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:02.663 "is_configured": true, 00:34:02.663 "data_offset": 2048, 00:34:02.663 
"data_size": 63488 00:34:02.663 } 00:34:02.663 ] 00:34:02.663 } 00:34:02.663 } 00:34:02.663 }' 00:34:02.663 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:02.663 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:34:02.663 pt2 00:34:02.663 pt3' 00:34:02.663 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:02.663 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:34:02.663 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:02.663 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:02.663 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:34:02.663 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.663 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:02.663 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.663 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:02.663 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:02.663 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:02.663 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:34:02.663 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.663 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:34:02.663 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:02.663 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.663 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:02.663 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:02.663 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:02.663 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:34:02.663 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.663 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:02.663 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:02.663 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.922 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:02.923 [2024-12-09 23:15:43.314626] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=11e2aeb3-04b2-4ca5-abee-092c48a8b810 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 11e2aeb3-04b2-4ca5-abee-092c48a8b810 ']' 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:02.923 [2024-12-09 23:15:43.358323] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:02.923 [2024-12-09 23:15:43.358498] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:02.923 [2024-12-09 23:15:43.358616] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:02.923 [2024-12-09 23:15:43.358685] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:02.923 [2024-12-09 23:15:43.358698] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.923 23:15:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:02.923 [2024-12-09 23:15:43.498411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:34:02.923 [2024-12-09 23:15:43.500798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 
00:34:02.923 [2024-12-09 23:15:43.500855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:34:02.923 [2024-12-09 23:15:43.500914] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:34:02.923 [2024-12-09 23:15:43.500979] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:34:02.923 [2024-12-09 23:15:43.501003] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:34:02.923 [2024-12-09 23:15:43.501026] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:02.923 [2024-12-09 23:15:43.501037] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:34:02.923 request: 00:34:02.923 { 00:34:02.923 "name": "raid_bdev1", 00:34:02.923 "raid_level": "concat", 00:34:02.923 "base_bdevs": [ 00:34:02.923 "malloc1", 00:34:02.923 "malloc2", 00:34:02.923 "malloc3" 00:34:02.923 ], 00:34:02.923 "strip_size_kb": 64, 00:34:02.923 "superblock": false, 00:34:02.923 "method": "bdev_raid_create", 00:34:02.923 "req_id": 1 00:34:02.923 } 00:34:02.923 Got JSON-RPC error response 00:34:02.923 response: 00:34:02.923 { 00:34:02.923 "code": -17, 00:34:02.923 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:34:02.923 } 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:02.923 [2024-12-09 23:15:43.542319] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:02.923 [2024-12-09 23:15:43.542515] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:02.923 [2024-12-09 23:15:43.542579] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:34:02.923 [2024-12-09 23:15:43.542666] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:02.923 [2024-12-09 23:15:43.545235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:02.923 [2024-12-09 23:15:43.545382] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:02.923 [2024-12-09 23:15:43.545584] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:34:02.923 [2024-12-09 23:15:43.545736] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:02.923 pt1 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:02.923 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:03.182 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.182 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:03.182 "name": "raid_bdev1", 
00:34:03.182 "uuid": "11e2aeb3-04b2-4ca5-abee-092c48a8b810", 00:34:03.182 "strip_size_kb": 64, 00:34:03.182 "state": "configuring", 00:34:03.182 "raid_level": "concat", 00:34:03.182 "superblock": true, 00:34:03.182 "num_base_bdevs": 3, 00:34:03.182 "num_base_bdevs_discovered": 1, 00:34:03.182 "num_base_bdevs_operational": 3, 00:34:03.182 "base_bdevs_list": [ 00:34:03.182 { 00:34:03.182 "name": "pt1", 00:34:03.182 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:03.182 "is_configured": true, 00:34:03.182 "data_offset": 2048, 00:34:03.182 "data_size": 63488 00:34:03.182 }, 00:34:03.182 { 00:34:03.182 "name": null, 00:34:03.182 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:03.182 "is_configured": false, 00:34:03.182 "data_offset": 2048, 00:34:03.182 "data_size": 63488 00:34:03.182 }, 00:34:03.182 { 00:34:03.182 "name": null, 00:34:03.182 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:03.182 "is_configured": false, 00:34:03.182 "data_offset": 2048, 00:34:03.182 "data_size": 63488 00:34:03.182 } 00:34:03.182 ] 00:34:03.182 }' 00:34:03.182 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:03.182 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:03.441 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:34:03.441 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:03.441 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.441 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:03.441 [2024-12-09 23:15:43.978374] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:03.441 [2024-12-09 23:15:43.978464] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:03.441 [2024-12-09 23:15:43.978493] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:34:03.441 [2024-12-09 23:15:43.978506] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:03.441 [2024-12-09 23:15:43.978986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:03.441 [2024-12-09 23:15:43.979006] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:03.441 [2024-12-09 23:15:43.979101] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:34:03.441 [2024-12-09 23:15:43.979132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:03.441 pt2 00:34:03.441 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.441 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:34:03.441 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.441 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:03.441 [2024-12-09 23:15:43.986373] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:34:03.441 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.441 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:34:03.441 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:03.441 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:03.441 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:03.441 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:03.441 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:34:03.441 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:03.441 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:03.441 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:03.441 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:03.441 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:03.441 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.441 23:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:03.441 23:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:03.441 23:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.441 23:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:03.441 "name": "raid_bdev1", 00:34:03.441 "uuid": "11e2aeb3-04b2-4ca5-abee-092c48a8b810", 00:34:03.441 "strip_size_kb": 64, 00:34:03.441 "state": "configuring", 00:34:03.441 "raid_level": "concat", 00:34:03.441 "superblock": true, 00:34:03.441 "num_base_bdevs": 3, 00:34:03.441 "num_base_bdevs_discovered": 1, 00:34:03.441 "num_base_bdevs_operational": 3, 00:34:03.441 "base_bdevs_list": [ 00:34:03.441 { 00:34:03.442 "name": "pt1", 00:34:03.442 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:03.442 "is_configured": true, 00:34:03.442 "data_offset": 2048, 00:34:03.442 "data_size": 63488 00:34:03.442 }, 00:34:03.442 { 00:34:03.442 "name": null, 00:34:03.442 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:03.442 "is_configured": false, 00:34:03.442 "data_offset": 0, 00:34:03.442 "data_size": 63488 00:34:03.442 }, 00:34:03.442 { 00:34:03.442 "name": null, 00:34:03.442 
"uuid": "00000000-0000-0000-0000-000000000003", 00:34:03.442 "is_configured": false, 00:34:03.442 "data_offset": 2048, 00:34:03.442 "data_size": 63488 00:34:03.442 } 00:34:03.442 ] 00:34:03.442 }' 00:34:03.442 23:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:03.442 23:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:04.008 23:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:34:04.008 23:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:34:04.008 23:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:04.008 23:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.008 23:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:04.008 [2024-12-09 23:15:44.422337] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:04.008 [2024-12-09 23:15:44.422432] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:04.008 [2024-12-09 23:15:44.422457] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:34:04.008 [2024-12-09 23:15:44.422472] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:04.008 [2024-12-09 23:15:44.422991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:04.008 [2024-12-09 23:15:44.423017] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:04.008 [2024-12-09 23:15:44.423126] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:34:04.008 [2024-12-09 23:15:44.423159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:04.008 pt2 00:34:04.008 23:15:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.008 23:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:34:04.008 23:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:34:04.008 23:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:34:04.008 23:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.008 23:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:04.008 [2024-12-09 23:15:44.434339] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:34:04.008 [2024-12-09 23:15:44.434422] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:04.008 [2024-12-09 23:15:44.434445] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:34:04.008 [2024-12-09 23:15:44.434460] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:04.008 [2024-12-09 23:15:44.434955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:04.008 [2024-12-09 23:15:44.434992] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:34:04.008 [2024-12-09 23:15:44.435077] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:34:04.008 [2024-12-09 23:15:44.435108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:34:04.008 [2024-12-09 23:15:44.435256] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:34:04.008 [2024-12-09 23:15:44.435271] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:34:04.008 [2024-12-09 23:15:44.435588] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:34:04.008 [2024-12-09 23:15:44.435757] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:34:04.008 [2024-12-09 23:15:44.435770] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:34:04.008 [2024-12-09 23:15:44.435939] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:04.008 pt3 00:34:04.008 23:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.008 23:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:34:04.008 23:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:34:04.008 23:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:34:04.008 23:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:04.008 23:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:04.008 23:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:04.008 23:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:04.008 23:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:04.008 23:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:04.008 23:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:04.008 23:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:04.008 23:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:04.008 23:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:04.008 23:15:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.008 23:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:04.008 23:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:04.008 23:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.008 23:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:04.008 "name": "raid_bdev1", 00:34:04.008 "uuid": "11e2aeb3-04b2-4ca5-abee-092c48a8b810", 00:34:04.008 "strip_size_kb": 64, 00:34:04.008 "state": "online", 00:34:04.008 "raid_level": "concat", 00:34:04.008 "superblock": true, 00:34:04.008 "num_base_bdevs": 3, 00:34:04.008 "num_base_bdevs_discovered": 3, 00:34:04.008 "num_base_bdevs_operational": 3, 00:34:04.008 "base_bdevs_list": [ 00:34:04.008 { 00:34:04.008 "name": "pt1", 00:34:04.008 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:04.008 "is_configured": true, 00:34:04.008 "data_offset": 2048, 00:34:04.008 "data_size": 63488 00:34:04.008 }, 00:34:04.008 { 00:34:04.008 "name": "pt2", 00:34:04.008 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:04.008 "is_configured": true, 00:34:04.008 "data_offset": 2048, 00:34:04.008 "data_size": 63488 00:34:04.008 }, 00:34:04.008 { 00:34:04.008 "name": "pt3", 00:34:04.008 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:04.008 "is_configured": true, 00:34:04.008 "data_offset": 2048, 00:34:04.008 "data_size": 63488 00:34:04.008 } 00:34:04.008 ] 00:34:04.008 }' 00:34:04.008 23:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:04.008 23:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:04.267 23:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:34:04.267 23:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:34:04.267 23:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:34:04.267 23:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:34:04.267 23:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:34:04.267 23:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:34:04.526 23:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:04.526 23:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.526 23:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:34:04.526 23:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:04.526 [2024-12-09 23:15:44.910307] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:04.526 23:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.526 23:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:04.526 "name": "raid_bdev1", 00:34:04.526 "aliases": [ 00:34:04.526 "11e2aeb3-04b2-4ca5-abee-092c48a8b810" 00:34:04.526 ], 00:34:04.526 "product_name": "Raid Volume", 00:34:04.526 "block_size": 512, 00:34:04.526 "num_blocks": 190464, 00:34:04.526 "uuid": "11e2aeb3-04b2-4ca5-abee-092c48a8b810", 00:34:04.526 "assigned_rate_limits": { 00:34:04.526 "rw_ios_per_sec": 0, 00:34:04.526 "rw_mbytes_per_sec": 0, 00:34:04.526 "r_mbytes_per_sec": 0, 00:34:04.526 "w_mbytes_per_sec": 0 00:34:04.526 }, 00:34:04.526 "claimed": false, 00:34:04.526 "zoned": false, 00:34:04.526 "supported_io_types": { 00:34:04.526 "read": true, 00:34:04.526 "write": true, 00:34:04.526 "unmap": true, 00:34:04.526 "flush": true, 00:34:04.526 "reset": true, 00:34:04.526 "nvme_admin": false, 00:34:04.526 "nvme_io": false, 
00:34:04.526 "nvme_io_md": false, 00:34:04.526 "write_zeroes": true, 00:34:04.526 "zcopy": false, 00:34:04.526 "get_zone_info": false, 00:34:04.526 "zone_management": false, 00:34:04.526 "zone_append": false, 00:34:04.526 "compare": false, 00:34:04.526 "compare_and_write": false, 00:34:04.526 "abort": false, 00:34:04.526 "seek_hole": false, 00:34:04.526 "seek_data": false, 00:34:04.526 "copy": false, 00:34:04.526 "nvme_iov_md": false 00:34:04.526 }, 00:34:04.526 "memory_domains": [ 00:34:04.526 { 00:34:04.526 "dma_device_id": "system", 00:34:04.526 "dma_device_type": 1 00:34:04.526 }, 00:34:04.526 { 00:34:04.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:04.526 "dma_device_type": 2 00:34:04.526 }, 00:34:04.526 { 00:34:04.526 "dma_device_id": "system", 00:34:04.526 "dma_device_type": 1 00:34:04.526 }, 00:34:04.526 { 00:34:04.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:04.526 "dma_device_type": 2 00:34:04.526 }, 00:34:04.526 { 00:34:04.526 "dma_device_id": "system", 00:34:04.526 "dma_device_type": 1 00:34:04.526 }, 00:34:04.526 { 00:34:04.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:04.526 "dma_device_type": 2 00:34:04.526 } 00:34:04.526 ], 00:34:04.526 "driver_specific": { 00:34:04.526 "raid": { 00:34:04.526 "uuid": "11e2aeb3-04b2-4ca5-abee-092c48a8b810", 00:34:04.526 "strip_size_kb": 64, 00:34:04.526 "state": "online", 00:34:04.526 "raid_level": "concat", 00:34:04.526 "superblock": true, 00:34:04.526 "num_base_bdevs": 3, 00:34:04.526 "num_base_bdevs_discovered": 3, 00:34:04.526 "num_base_bdevs_operational": 3, 00:34:04.526 "base_bdevs_list": [ 00:34:04.526 { 00:34:04.526 "name": "pt1", 00:34:04.526 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:04.526 "is_configured": true, 00:34:04.526 "data_offset": 2048, 00:34:04.526 "data_size": 63488 00:34:04.526 }, 00:34:04.526 { 00:34:04.526 "name": "pt2", 00:34:04.526 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:04.526 "is_configured": true, 00:34:04.526 "data_offset": 2048, 00:34:04.526 
"data_size": 63488 00:34:04.526 }, 00:34:04.526 { 00:34:04.526 "name": "pt3", 00:34:04.526 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:04.526 "is_configured": true, 00:34:04.526 "data_offset": 2048, 00:34:04.526 "data_size": 63488 00:34:04.526 } 00:34:04.526 ] 00:34:04.526 } 00:34:04.526 } 00:34:04.526 }' 00:34:04.526 23:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:04.526 23:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:34:04.526 pt2 00:34:04.526 pt3' 00:34:04.526 23:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:04.526 23:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:34:04.526 23:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:04.526 23:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:34:04.526 23:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.526 23:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:04.526 23:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:04.526 23:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.526 23:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:04.526 23:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:04.526 23:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:04.526 23:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:34:04.526 23:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.526 23:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:04.526 23:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:04.526 23:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.526 23:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:04.526 23:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:04.526 23:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:04.526 23:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:34:04.526 23:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.526 23:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:04.526 23:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:04.526 23:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.526 23:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:04.526 23:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:04.526 23:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:04.526 23:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.526 23:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:34:04.526 23:15:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:34:04.785 [2024-12-09 23:15:45.161896] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:04.785 23:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.785 23:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 11e2aeb3-04b2-4ca5-abee-092c48a8b810 '!=' 11e2aeb3-04b2-4ca5-abee-092c48a8b810 ']' 00:34:04.785 23:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:34:04.785 23:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:34:04.785 23:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:34:04.785 23:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66731 00:34:04.785 23:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66731 ']' 00:34:04.785 23:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66731 00:34:04.785 23:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:34:04.785 23:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:04.785 23:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66731 00:34:04.785 killing process with pid 66731 00:34:04.785 23:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:04.785 23:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:04.785 23:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66731' 00:34:04.785 23:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66731 00:34:04.785 [2024-12-09 23:15:45.240761] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:34:04.785 [2024-12-09 23:15:45.240870] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:04.785 23:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66731 00:34:04.785 [2024-12-09 23:15:45.240932] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:04.785 [2024-12-09 23:15:45.240947] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:34:05.044 [2024-12-09 23:15:45.558266] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:06.426 23:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:34:06.426 00:34:06.426 real 0m5.278s 00:34:06.426 user 0m7.476s 00:34:06.426 sys 0m1.011s 00:34:06.426 23:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:06.426 23:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:06.426 ************************************ 00:34:06.426 END TEST raid_superblock_test 00:34:06.426 ************************************ 00:34:06.426 23:15:46 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:34:06.426 23:15:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:34:06.426 23:15:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:06.426 23:15:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:06.426 ************************************ 00:34:06.426 START TEST raid_read_error_test 00:34:06.426 ************************************ 00:34:06.426 23:15:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:34:06.426 23:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:34:06.426 23:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:34:06.426 23:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:34:06.426 23:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:34:06.426 23:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:34:06.426 23:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:34:06.426 23:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:34:06.426 23:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:34:06.426 23:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:34:06.426 23:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:34:06.426 23:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:34:06.426 23:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:34:06.426 23:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:34:06.426 23:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:34:06.426 23:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:34:06.426 23:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:34:06.426 23:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:34:06.426 23:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:34:06.426 23:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:34:06.426 23:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:34:06.426 23:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:34:06.426 23:15:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:34:06.426 23:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:34:06.426 23:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:34:06.426 23:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:34:06.426 23:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.gBiMiBRMai 00:34:06.426 23:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=66990 00:34:06.426 23:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:34:06.426 23:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 66990 00:34:06.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:06.427 23:15:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 66990 ']' 00:34:06.427 23:15:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:06.427 23:15:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:06.427 23:15:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:06.427 23:15:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:06.427 23:15:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:06.427 [2024-12-09 23:15:46.948675] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:34:06.427 [2024-12-09 23:15:46.948996] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66990 ] 00:34:06.689 [2024-12-09 23:15:47.136647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:06.689 [2024-12-09 23:15:47.269635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:06.947 [2024-12-09 23:15:47.479774] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:06.947 [2024-12-09 23:15:47.479843] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:07.206 23:15:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:07.206 23:15:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:34:07.206 23:15:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:34:07.206 23:15:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:34:07.206 23:15:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.206 23:15:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:07.465 BaseBdev1_malloc 00:34:07.465 23:15:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.465 23:15:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:34:07.465 23:15:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.465 23:15:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:07.465 true 00:34:07.465 23:15:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:34:07.465 23:15:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:34:07.465 23:15:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.465 23:15:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:07.465 [2024-12-09 23:15:47.901590] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:34:07.465 [2024-12-09 23:15:47.901661] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:07.465 [2024-12-09 23:15:47.901688] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:34:07.465 [2024-12-09 23:15:47.901704] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:07.465 [2024-12-09 23:15:47.904336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:07.465 [2024-12-09 23:15:47.904387] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:07.465 BaseBdev1 00:34:07.465 23:15:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.465 23:15:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:34:07.465 23:15:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:34:07.465 23:15:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.465 23:15:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:07.465 BaseBdev2_malloc 00:34:07.465 23:15:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.465 23:15:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:34:07.465 23:15:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.465 23:15:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:07.465 true 00:34:07.465 23:15:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.465 23:15:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:34:07.465 23:15:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.465 23:15:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:07.465 [2024-12-09 23:15:47.965070] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:34:07.465 [2024-12-09 23:15:47.965133] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:07.465 [2024-12-09 23:15:47.965155] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:34:07.465 [2024-12-09 23:15:47.965169] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:07.465 [2024-12-09 23:15:47.967729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:07.465 [2024-12-09 23:15:47.967943] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:34:07.465 BaseBdev2 00:34:07.465 23:15:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.465 23:15:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:34:07.465 23:15:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:34:07.465 23:15:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.465 23:15:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:07.465 BaseBdev3_malloc 00:34:07.465 23:15:48 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.465 23:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:34:07.465 23:15:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.465 23:15:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:07.465 true 00:34:07.465 23:15:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.465 23:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:34:07.465 23:15:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.465 23:15:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:07.465 [2024-12-09 23:15:48.044834] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:34:07.465 [2024-12-09 23:15:48.044896] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:07.465 [2024-12-09 23:15:48.044918] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:34:07.465 [2024-12-09 23:15:48.044933] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:07.465 [2024-12-09 23:15:48.047513] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:07.465 [2024-12-09 23:15:48.047677] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:34:07.465 BaseBdev3 00:34:07.465 23:15:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.466 23:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:34:07.466 23:15:48 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.466 23:15:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:07.466 [2024-12-09 23:15:48.052911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:07.466 [2024-12-09 23:15:48.055093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:07.466 [2024-12-09 23:15:48.055294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:07.466 [2024-12-09 23:15:48.055537] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:34:07.466 [2024-12-09 23:15:48.055553] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:34:07.466 [2024-12-09 23:15:48.055852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:34:07.466 [2024-12-09 23:15:48.056014] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:34:07.466 [2024-12-09 23:15:48.056031] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:34:07.466 [2024-12-09 23:15:48.056196] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:07.466 23:15:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.466 23:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:34:07.466 23:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:07.466 23:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:07.466 23:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:07.466 23:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:07.466 23:15:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:07.466 23:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:07.466 23:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:07.466 23:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:07.466 23:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:07.466 23:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:07.466 23:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:07.466 23:15:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.466 23:15:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:07.466 23:15:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.725 23:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:07.725 "name": "raid_bdev1", 00:34:07.725 "uuid": "41c179c6-2273-4322-a176-c2df6c9ce655", 00:34:07.725 "strip_size_kb": 64, 00:34:07.725 "state": "online", 00:34:07.725 "raid_level": "concat", 00:34:07.725 "superblock": true, 00:34:07.725 "num_base_bdevs": 3, 00:34:07.725 "num_base_bdevs_discovered": 3, 00:34:07.725 "num_base_bdevs_operational": 3, 00:34:07.725 "base_bdevs_list": [ 00:34:07.725 { 00:34:07.725 "name": "BaseBdev1", 00:34:07.725 "uuid": "7369ca6a-4892-5400-bab9-57ba52914573", 00:34:07.725 "is_configured": true, 00:34:07.725 "data_offset": 2048, 00:34:07.725 "data_size": 63488 00:34:07.725 }, 00:34:07.725 { 00:34:07.725 "name": "BaseBdev2", 00:34:07.725 "uuid": "9af77ac7-132f-5d5a-a648-0cf7efad1e9d", 00:34:07.725 "is_configured": true, 00:34:07.725 "data_offset": 2048, 00:34:07.725 "data_size": 63488 
00:34:07.725 }, 00:34:07.725 { 00:34:07.725 "name": "BaseBdev3", 00:34:07.725 "uuid": "a26eecc6-0236-5594-8ab2-fba721637733", 00:34:07.725 "is_configured": true, 00:34:07.725 "data_offset": 2048, 00:34:07.725 "data_size": 63488 00:34:07.725 } 00:34:07.725 ] 00:34:07.725 }' 00:34:07.725 23:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:07.725 23:15:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:08.015 23:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:34:08.015 23:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:34:08.272 [2024-12-09 23:15:48.657543] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:34:09.209 23:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:34:09.209 23:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.209 23:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:09.209 23:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.209 23:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:34:09.209 23:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:34:09.209 23:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:34:09.209 23:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:34:09.209 23:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:09.209 23:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:34:09.209 23:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:09.209 23:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:09.209 23:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:09.209 23:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:09.209 23:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:09.209 23:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:09.209 23:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:09.209 23:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:09.209 23:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:09.209 23:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.209 23:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:09.209 23:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.209 23:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:09.209 "name": "raid_bdev1", 00:34:09.209 "uuid": "41c179c6-2273-4322-a176-c2df6c9ce655", 00:34:09.209 "strip_size_kb": 64, 00:34:09.209 "state": "online", 00:34:09.209 "raid_level": "concat", 00:34:09.209 "superblock": true, 00:34:09.209 "num_base_bdevs": 3, 00:34:09.209 "num_base_bdevs_discovered": 3, 00:34:09.209 "num_base_bdevs_operational": 3, 00:34:09.209 "base_bdevs_list": [ 00:34:09.209 { 00:34:09.209 "name": "BaseBdev1", 00:34:09.209 "uuid": "7369ca6a-4892-5400-bab9-57ba52914573", 00:34:09.209 "is_configured": true, 00:34:09.209 "data_offset": 2048, 00:34:09.209 "data_size": 63488 
00:34:09.209 }, 00:34:09.209 { 00:34:09.209 "name": "BaseBdev2", 00:34:09.209 "uuid": "9af77ac7-132f-5d5a-a648-0cf7efad1e9d", 00:34:09.209 "is_configured": true, 00:34:09.209 "data_offset": 2048, 00:34:09.209 "data_size": 63488 00:34:09.209 }, 00:34:09.209 { 00:34:09.209 "name": "BaseBdev3", 00:34:09.209 "uuid": "a26eecc6-0236-5594-8ab2-fba721637733", 00:34:09.209 "is_configured": true, 00:34:09.209 "data_offset": 2048, 00:34:09.209 "data_size": 63488 00:34:09.209 } 00:34:09.209 ] 00:34:09.209 }' 00:34:09.209 23:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:09.209 23:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:09.468 23:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:09.468 23:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.468 23:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:09.468 [2024-12-09 23:15:49.984714] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:09.468 [2024-12-09 23:15:49.984747] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:09.468 [2024-12-09 23:15:49.987648] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:09.468 [2024-12-09 23:15:49.987702] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:09.468 [2024-12-09 23:15:49.987746] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:09.468 [2024-12-09 23:15:49.987757] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:34:09.468 { 00:34:09.468 "results": [ 00:34:09.468 { 00:34:09.468 "job": "raid_bdev1", 00:34:09.468 "core_mask": "0x1", 00:34:09.468 "workload": "randrw", 00:34:09.468 "percentage": 50, 
00:34:09.468 "status": "finished", 00:34:09.468 "queue_depth": 1, 00:34:09.468 "io_size": 131072, 00:34:09.468 "runtime": 1.327017, 00:34:09.468 "iops": 15498.6710795717, 00:34:09.468 "mibps": 1937.3338849464626, 00:34:09.468 "io_failed": 1, 00:34:09.468 "io_timeout": 0, 00:34:09.468 "avg_latency_us": 89.07760610704193, 00:34:09.468 "min_latency_us": 27.347791164658634, 00:34:09.468 "max_latency_us": 1506.8016064257029 00:34:09.468 } 00:34:09.468 ], 00:34:09.468 "core_count": 1 00:34:09.468 } 00:34:09.468 23:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.468 23:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 66990 00:34:09.468 23:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 66990 ']' 00:34:09.468 23:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 66990 00:34:09.468 23:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:34:09.468 23:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:09.468 23:15:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66990 00:34:09.468 killing process with pid 66990 00:34:09.468 23:15:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:09.468 23:15:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:09.468 23:15:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66990' 00:34:09.468 23:15:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 66990 00:34:09.468 [2024-12-09 23:15:50.043182] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:09.468 23:15:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 66990 00:34:09.727 [2024-12-09 
23:15:50.277477] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:11.105 23:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.gBiMiBRMai 00:34:11.105 23:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:34:11.105 23:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:34:11.105 23:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:34:11.105 23:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:34:11.105 23:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:34:11.105 23:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:34:11.105 23:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:34:11.105 00:34:11.105 real 0m4.675s 00:34:11.105 user 0m5.605s 00:34:11.105 sys 0m0.641s 00:34:11.105 23:15:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:11.105 23:15:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:11.105 ************************************ 00:34:11.105 END TEST raid_read_error_test 00:34:11.105 ************************************ 00:34:11.105 23:15:51 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:34:11.105 23:15:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:34:11.105 23:15:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:11.105 23:15:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:11.105 ************************************ 00:34:11.105 START TEST raid_write_error_test 00:34:11.105 ************************************ 00:34:11.105 23:15:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:34:11.105 23:15:51 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:34:11.105 23:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:34:11.105 23:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:34:11.105 23:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:34:11.105 23:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:34:11.105 23:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:34:11.105 23:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:34:11.105 23:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:34:11.105 23:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:34:11.105 23:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:34:11.105 23:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:34:11.105 23:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:34:11.105 23:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:34:11.105 23:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:34:11.105 23:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:34:11.105 23:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:34:11.105 23:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:34:11.105 23:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:34:11.105 23:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:34:11.105 23:15:51 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:34:11.105 23:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:34:11.105 23:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:34:11.105 23:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:34:11.105 23:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:34:11.105 23:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:34:11.105 23:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.q3cLsncVnx 00:34:11.105 23:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67135 00:34:11.105 23:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67135 00:34:11.105 23:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:34:11.105 23:15:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67135 ']' 00:34:11.105 23:15:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:11.105 23:15:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:11.105 23:15:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:11.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:11.105 23:15:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:11.105 23:15:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:11.105 [2024-12-09 23:15:51.701047] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:34:11.105 [2024-12-09 23:15:51.701380] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67135 ] 00:34:11.363 [2024-12-09 23:15:51.888543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:11.622 [2024-12-09 23:15:52.006327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:11.622 [2024-12-09 23:15:52.225208] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:11.622 [2024-12-09 23:15:52.225475] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:12.190 BaseBdev1_malloc 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:12.190 true 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:12.190 [2024-12-09 23:15:52.615591] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:34:12.190 [2024-12-09 23:15:52.615658] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:12.190 [2024-12-09 23:15:52.615685] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:34:12.190 [2024-12-09 23:15:52.615699] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:12.190 [2024-12-09 23:15:52.618138] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:12.190 [2024-12-09 23:15:52.618196] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:12.190 BaseBdev1 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:34:12.190 BaseBdev2_malloc 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:12.190 true 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:12.190 [2024-12-09 23:15:52.685758] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:34:12.190 [2024-12-09 23:15:52.685959] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:12.190 [2024-12-09 23:15:52.685991] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:34:12.190 [2024-12-09 23:15:52.686007] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:12.190 [2024-12-09 23:15:52.688639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:12.190 [2024-12-09 23:15:52.688682] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:34:12.190 BaseBdev2 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:34:12.190 23:15:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:12.190 BaseBdev3_malloc 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:12.190 true 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:12.190 [2024-12-09 23:15:52.767087] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:34:12.190 [2024-12-09 23:15:52.767151] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:12.190 [2024-12-09 23:15:52.767174] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:34:12.190 [2024-12-09 23:15:52.767190] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:12.190 [2024-12-09 23:15:52.769769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:12.190 [2024-12-09 23:15:52.769814] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:34:12.190 BaseBdev3 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:12.190 [2024-12-09 23:15:52.779158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:12.190 [2024-12-09 23:15:52.781251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:12.190 [2024-12-09 23:15:52.781325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:12.190 [2024-12-09 23:15:52.781539] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:34:12.190 [2024-12-09 23:15:52.781554] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:34:12.190 [2024-12-09 23:15:52.781820] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:34:12.190 [2024-12-09 23:15:52.782001] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:34:12.190 [2024-12-09 23:15:52.782018] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:34:12.190 [2024-12-09 23:15:52.782169] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:12.190 23:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.449 23:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:12.449 "name": "raid_bdev1", 00:34:12.449 "uuid": "b1e0c080-a1cf-49aa-bb88-f961e1c6e62a", 00:34:12.449 "strip_size_kb": 64, 00:34:12.449 "state": "online", 00:34:12.449 "raid_level": "concat", 00:34:12.449 "superblock": true, 00:34:12.449 "num_base_bdevs": 3, 00:34:12.449 "num_base_bdevs_discovered": 3, 00:34:12.449 "num_base_bdevs_operational": 3, 00:34:12.449 "base_bdevs_list": [ 00:34:12.449 { 00:34:12.449 
"name": "BaseBdev1", 00:34:12.449 "uuid": "f4e293ab-308e-5c8a-bfde-28c04a939c30", 00:34:12.449 "is_configured": true, 00:34:12.449 "data_offset": 2048, 00:34:12.449 "data_size": 63488 00:34:12.449 }, 00:34:12.449 { 00:34:12.449 "name": "BaseBdev2", 00:34:12.449 "uuid": "7f440333-4d96-5980-9d07-c6cb744d9bcd", 00:34:12.449 "is_configured": true, 00:34:12.449 "data_offset": 2048, 00:34:12.449 "data_size": 63488 00:34:12.449 }, 00:34:12.449 { 00:34:12.449 "name": "BaseBdev3", 00:34:12.449 "uuid": "827c8f00-e538-5ce2-8aea-e2bf48fa96de", 00:34:12.449 "is_configured": true, 00:34:12.449 "data_offset": 2048, 00:34:12.449 "data_size": 63488 00:34:12.449 } 00:34:12.449 ] 00:34:12.449 }' 00:34:12.449 23:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:12.449 23:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:12.706 23:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:34:12.706 23:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:34:12.707 [2024-12-09 23:15:53.275786] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:34:13.640 23:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:34:13.640 23:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.640 23:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:13.640 23:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.640 23:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:34:13.640 23:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:34:13.641 23:15:54 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:34:13.641 23:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:34:13.641 23:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:13.641 23:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:13.641 23:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:13.641 23:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:13.641 23:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:13.641 23:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:13.641 23:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:13.641 23:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:13.641 23:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:13.641 23:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:13.641 23:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:13.641 23:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.641 23:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:13.641 23:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.641 23:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:13.641 "name": "raid_bdev1", 00:34:13.641 "uuid": "b1e0c080-a1cf-49aa-bb88-f961e1c6e62a", 00:34:13.641 "strip_size_kb": 64, 00:34:13.641 "state": "online", 
00:34:13.641 "raid_level": "concat", 00:34:13.641 "superblock": true, 00:34:13.641 "num_base_bdevs": 3, 00:34:13.641 "num_base_bdevs_discovered": 3, 00:34:13.641 "num_base_bdevs_operational": 3, 00:34:13.641 "base_bdevs_list": [ 00:34:13.641 { 00:34:13.641 "name": "BaseBdev1", 00:34:13.641 "uuid": "f4e293ab-308e-5c8a-bfde-28c04a939c30", 00:34:13.641 "is_configured": true, 00:34:13.641 "data_offset": 2048, 00:34:13.641 "data_size": 63488 00:34:13.641 }, 00:34:13.641 { 00:34:13.641 "name": "BaseBdev2", 00:34:13.641 "uuid": "7f440333-4d96-5980-9d07-c6cb744d9bcd", 00:34:13.641 "is_configured": true, 00:34:13.641 "data_offset": 2048, 00:34:13.641 "data_size": 63488 00:34:13.641 }, 00:34:13.641 { 00:34:13.641 "name": "BaseBdev3", 00:34:13.641 "uuid": "827c8f00-e538-5ce2-8aea-e2bf48fa96de", 00:34:13.641 "is_configured": true, 00:34:13.641 "data_offset": 2048, 00:34:13.641 "data_size": 63488 00:34:13.641 } 00:34:13.641 ] 00:34:13.641 }' 00:34:13.641 23:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:13.641 23:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:14.207 23:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:14.207 23:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.207 23:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:14.207 [2024-12-09 23:15:54.640845] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:14.207 [2024-12-09 23:15:54.640885] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:14.207 [2024-12-09 23:15:54.643841] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:14.207 [2024-12-09 23:15:54.643896] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:14.207 [2024-12-09 23:15:54.643938] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:14.207 [2024-12-09 23:15:54.643955] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:34:14.207 { 00:34:14.207 "results": [ 00:34:14.207 { 00:34:14.207 "job": "raid_bdev1", 00:34:14.207 "core_mask": "0x1", 00:34:14.207 "workload": "randrw", 00:34:14.207 "percentage": 50, 00:34:14.207 "status": "finished", 00:34:14.207 "queue_depth": 1, 00:34:14.207 "io_size": 131072, 00:34:14.207 "runtime": 1.365226, 00:34:14.207 "iops": 14826.849181014719, 00:34:14.207 "mibps": 1853.3561476268399, 00:34:14.207 "io_failed": 1, 00:34:14.207 "io_timeout": 0, 00:34:14.207 "avg_latency_us": 93.01717751805523, 00:34:14.207 "min_latency_us": 27.759036144578314, 00:34:14.207 "max_latency_us": 1526.5413654618474 00:34:14.207 } 00:34:14.207 ], 00:34:14.207 "core_count": 1 00:34:14.207 } 00:34:14.207 23:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.207 23:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67135 00:34:14.207 23:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67135 ']' 00:34:14.207 23:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67135 00:34:14.207 23:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:34:14.207 23:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:14.207 23:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67135 00:34:14.207 killing process with pid 67135 00:34:14.207 23:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:14.207 23:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:14.207 
23:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67135' 00:34:14.207 23:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67135 00:34:14.207 [2024-12-09 23:15:54.693285] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:14.207 23:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67135 00:34:14.464 [2024-12-09 23:15:54.945976] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:15.836 23:15:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.q3cLsncVnx 00:34:15.836 23:15:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:34:15.836 23:15:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:34:15.836 23:15:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:34:15.836 23:15:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:34:15.836 23:15:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:34:15.836 23:15:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:34:15.836 23:15:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:34:15.836 00:34:15.836 real 0m4.649s 00:34:15.836 user 0m5.441s 00:34:15.836 sys 0m0.628s 00:34:15.836 23:15:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:15.836 ************************************ 00:34:15.836 END TEST raid_write_error_test 00:34:15.836 ************************************ 00:34:15.836 23:15:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:15.836 23:15:56 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:34:15.836 23:15:56 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:34:15.836 23:15:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:34:15.836 23:15:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:15.836 23:15:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:15.836 ************************************ 00:34:15.836 START TEST raid_state_function_test 00:34:15.836 ************************************ 00:34:15.836 23:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:34:15.836 23:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:34:15.836 23:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:34:15.836 23:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:34:15.836 23:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:34:15.836 23:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:34:15.836 23:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:15.836 23:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:34:15.836 23:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:34:15.836 23:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:15.836 23:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:34:15.836 23:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:34:15.836 23:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:15.836 23:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:34:15.836 23:15:56 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:34:15.836 23:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:15.836 23:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:34:15.836 23:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:34:15.836 23:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:34:15.836 23:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:34:15.836 23:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:34:15.836 23:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:34:15.836 23:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:34:15.836 23:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:34:15.836 23:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:34:15.836 23:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:34:15.836 Process raid pid: 67279 00:34:15.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:15.836 23:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67279 00:34:15.836 23:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67279' 00:34:15.836 23:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67279 00:34:15.836 23:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67279 ']' 00:34:15.836 23:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:15.836 23:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:15.836 23:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:15.836 23:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:15.836 23:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:34:15.836 23:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:15.836 [2024-12-09 23:15:56.417233] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:34:15.836 [2024-12-09 23:15:56.417633] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:16.093 [2024-12-09 23:15:56.609444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:16.351 [2024-12-09 23:15:56.742289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:16.351 [2024-12-09 23:15:56.975632] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:16.351 [2024-12-09 23:15:56.975863] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:16.934 23:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:16.934 23:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:34:16.934 23:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:34:16.934 23:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.934 23:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:16.934 [2024-12-09 23:15:57.257837] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:16.934 [2024-12-09 23:15:57.257907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:16.934 [2024-12-09 23:15:57.257921] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:16.934 [2024-12-09 23:15:57.257935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:16.934 [2024-12-09 23:15:57.257943] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:34:16.934 [2024-12-09 23:15:57.257956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:16.934 23:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.934 23:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:34:16.934 23:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:16.934 23:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:16.934 23:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:16.934 23:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:16.934 23:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:16.934 23:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:16.934 23:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:16.934 23:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:16.934 23:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:16.934 23:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:16.934 23:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:16.934 23:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.934 23:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:16.934 23:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.934 23:15:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:16.934 "name": "Existed_Raid", 00:34:16.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:16.934 "strip_size_kb": 0, 00:34:16.934 "state": "configuring", 00:34:16.934 "raid_level": "raid1", 00:34:16.934 "superblock": false, 00:34:16.934 "num_base_bdevs": 3, 00:34:16.934 "num_base_bdevs_discovered": 0, 00:34:16.934 "num_base_bdevs_operational": 3, 00:34:16.934 "base_bdevs_list": [ 00:34:16.934 { 00:34:16.934 "name": "BaseBdev1", 00:34:16.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:16.935 "is_configured": false, 00:34:16.935 "data_offset": 0, 00:34:16.935 "data_size": 0 00:34:16.935 }, 00:34:16.935 { 00:34:16.935 "name": "BaseBdev2", 00:34:16.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:16.935 "is_configured": false, 00:34:16.935 "data_offset": 0, 00:34:16.935 "data_size": 0 00:34:16.935 }, 00:34:16.935 { 00:34:16.935 "name": "BaseBdev3", 00:34:16.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:16.935 "is_configured": false, 00:34:16.935 "data_offset": 0, 00:34:16.935 "data_size": 0 00:34:16.935 } 00:34:16.935 ] 00:34:16.935 }' 00:34:16.935 23:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:16.935 23:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:17.194 23:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:34:17.194 23:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.194 23:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:17.194 [2024-12-09 23:15:57.717586] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:17.194 [2024-12-09 23:15:57.717625] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
00:34:17.194 23:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.194 23:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:34:17.194 23:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.194 23:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:17.194 [2024-12-09 23:15:57.725558] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:17.194 [2024-12-09 23:15:57.725611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:17.194 [2024-12-09 23:15:57.725622] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:17.194 [2024-12-09 23:15:57.725636] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:17.194 [2024-12-09 23:15:57.725645] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:17.194 [2024-12-09 23:15:57.725658] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:17.194 23:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.194 23:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:34:17.194 23:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.194 23:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:17.194 [2024-12-09 23:15:57.774210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:17.194 BaseBdev1 00:34:17.194 23:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:34:17.194 23:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:34:17.194 23:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:34:17.194 23:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:17.194 23:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:34:17.194 23:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:17.194 23:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:17.195 23:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:17.195 23:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.195 23:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:17.195 23:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.195 23:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:17.195 23:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.195 23:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:17.195 [ 00:34:17.195 { 00:34:17.195 "name": "BaseBdev1", 00:34:17.195 "aliases": [ 00:34:17.195 "36c4df72-0a32-493e-a729-e6eb811a8b89" 00:34:17.195 ], 00:34:17.195 "product_name": "Malloc disk", 00:34:17.195 "block_size": 512, 00:34:17.195 "num_blocks": 65536, 00:34:17.195 "uuid": "36c4df72-0a32-493e-a729-e6eb811a8b89", 00:34:17.195 "assigned_rate_limits": { 00:34:17.195 "rw_ios_per_sec": 0, 00:34:17.195 "rw_mbytes_per_sec": 0, 00:34:17.195 "r_mbytes_per_sec": 0, 00:34:17.195 "w_mbytes_per_sec": 0 00:34:17.195 }, 
00:34:17.195 "claimed": true, 00:34:17.195 "claim_type": "exclusive_write", 00:34:17.195 "zoned": false, 00:34:17.195 "supported_io_types": { 00:34:17.195 "read": true, 00:34:17.195 "write": true, 00:34:17.195 "unmap": true, 00:34:17.195 "flush": true, 00:34:17.195 "reset": true, 00:34:17.195 "nvme_admin": false, 00:34:17.195 "nvme_io": false, 00:34:17.195 "nvme_io_md": false, 00:34:17.195 "write_zeroes": true, 00:34:17.195 "zcopy": true, 00:34:17.195 "get_zone_info": false, 00:34:17.195 "zone_management": false, 00:34:17.195 "zone_append": false, 00:34:17.195 "compare": false, 00:34:17.195 "compare_and_write": false, 00:34:17.195 "abort": true, 00:34:17.195 "seek_hole": false, 00:34:17.195 "seek_data": false, 00:34:17.195 "copy": true, 00:34:17.195 "nvme_iov_md": false 00:34:17.195 }, 00:34:17.195 "memory_domains": [ 00:34:17.195 { 00:34:17.195 "dma_device_id": "system", 00:34:17.195 "dma_device_type": 1 00:34:17.195 }, 00:34:17.195 { 00:34:17.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:17.195 "dma_device_type": 2 00:34:17.195 } 00:34:17.195 ], 00:34:17.195 "driver_specific": {} 00:34:17.195 } 00:34:17.195 ] 00:34:17.195 23:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.195 23:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:34:17.195 23:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:34:17.195 23:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:17.195 23:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:17.195 23:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:17.195 23:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:17.195 23:15:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:17.195 23:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:17.195 23:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:17.195 23:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:17.195 23:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:17.195 23:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:17.195 23:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:17.456 23:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.456 23:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:17.456 23:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.456 23:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:17.456 "name": "Existed_Raid", 00:34:17.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:17.456 "strip_size_kb": 0, 00:34:17.456 "state": "configuring", 00:34:17.456 "raid_level": "raid1", 00:34:17.456 "superblock": false, 00:34:17.456 "num_base_bdevs": 3, 00:34:17.456 "num_base_bdevs_discovered": 1, 00:34:17.456 "num_base_bdevs_operational": 3, 00:34:17.456 "base_bdevs_list": [ 00:34:17.456 { 00:34:17.456 "name": "BaseBdev1", 00:34:17.456 "uuid": "36c4df72-0a32-493e-a729-e6eb811a8b89", 00:34:17.456 "is_configured": true, 00:34:17.456 "data_offset": 0, 00:34:17.456 "data_size": 65536 00:34:17.456 }, 00:34:17.456 { 00:34:17.456 "name": "BaseBdev2", 00:34:17.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:17.456 "is_configured": false, 00:34:17.456 
"data_offset": 0, 00:34:17.456 "data_size": 0 00:34:17.456 }, 00:34:17.456 { 00:34:17.456 "name": "BaseBdev3", 00:34:17.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:17.456 "is_configured": false, 00:34:17.456 "data_offset": 0, 00:34:17.456 "data_size": 0 00:34:17.456 } 00:34:17.456 ] 00:34:17.456 }' 00:34:17.456 23:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:17.456 23:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:17.717 23:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:34:17.717 23:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.717 23:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:17.717 [2024-12-09 23:15:58.265568] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:17.717 [2024-12-09 23:15:58.265768] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:34:17.717 23:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.717 23:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:34:17.717 23:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.717 23:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:17.717 [2024-12-09 23:15:58.277598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:17.717 [2024-12-09 23:15:58.279864] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:17.717 [2024-12-09 23:15:58.280034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 
doesn't exist now 00:34:17.717 [2024-12-09 23:15:58.280057] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:17.717 [2024-12-09 23:15:58.280072] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:17.717 23:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.717 23:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:34:17.717 23:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:34:17.717 23:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:34:17.717 23:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:17.717 23:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:17.717 23:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:17.717 23:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:17.717 23:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:17.717 23:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:17.717 23:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:17.717 23:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:17.717 23:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:17.717 23:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:17.717 23:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:34:17.717 23:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.717 23:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:17.717 23:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.717 23:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:17.717 "name": "Existed_Raid", 00:34:17.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:17.717 "strip_size_kb": 0, 00:34:17.717 "state": "configuring", 00:34:17.717 "raid_level": "raid1", 00:34:17.717 "superblock": false, 00:34:17.717 "num_base_bdevs": 3, 00:34:17.717 "num_base_bdevs_discovered": 1, 00:34:17.717 "num_base_bdevs_operational": 3, 00:34:17.717 "base_bdevs_list": [ 00:34:17.717 { 00:34:17.717 "name": "BaseBdev1", 00:34:17.717 "uuid": "36c4df72-0a32-493e-a729-e6eb811a8b89", 00:34:17.717 "is_configured": true, 00:34:17.717 "data_offset": 0, 00:34:17.717 "data_size": 65536 00:34:17.717 }, 00:34:17.717 { 00:34:17.717 "name": "BaseBdev2", 00:34:17.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:17.717 "is_configured": false, 00:34:17.717 "data_offset": 0, 00:34:17.717 "data_size": 0 00:34:17.717 }, 00:34:17.717 { 00:34:17.717 "name": "BaseBdev3", 00:34:17.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:17.717 "is_configured": false, 00:34:17.717 "data_offset": 0, 00:34:17.717 "data_size": 0 00:34:17.717 } 00:34:17.717 ] 00:34:17.717 }' 00:34:17.717 23:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:17.717 23:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:18.281 23:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:34:18.281 23:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.281 
23:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:18.281 [2024-12-09 23:15:58.775282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:18.281 BaseBdev2 00:34:18.282 23:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.282 23:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:34:18.282 23:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:34:18.282 23:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:18.282 23:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:34:18.282 23:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:18.282 23:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:18.282 23:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:18.282 23:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.282 23:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:18.282 23:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.282 23:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:18.282 23:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.282 23:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:18.282 [ 00:34:18.282 { 00:34:18.282 "name": "BaseBdev2", 00:34:18.282 "aliases": [ 00:34:18.282 "95ab08cf-ee3b-43b8-b8c8-d16893421c4d" 00:34:18.282 ], 00:34:18.282 "product_name": 
"Malloc disk", 00:34:18.282 "block_size": 512, 00:34:18.282 "num_blocks": 65536, 00:34:18.282 "uuid": "95ab08cf-ee3b-43b8-b8c8-d16893421c4d", 00:34:18.282 "assigned_rate_limits": { 00:34:18.282 "rw_ios_per_sec": 0, 00:34:18.282 "rw_mbytes_per_sec": 0, 00:34:18.282 "r_mbytes_per_sec": 0, 00:34:18.282 "w_mbytes_per_sec": 0 00:34:18.282 }, 00:34:18.282 "claimed": true, 00:34:18.282 "claim_type": "exclusive_write", 00:34:18.282 "zoned": false, 00:34:18.282 "supported_io_types": { 00:34:18.282 "read": true, 00:34:18.282 "write": true, 00:34:18.282 "unmap": true, 00:34:18.282 "flush": true, 00:34:18.282 "reset": true, 00:34:18.282 "nvme_admin": false, 00:34:18.282 "nvme_io": false, 00:34:18.282 "nvme_io_md": false, 00:34:18.282 "write_zeroes": true, 00:34:18.282 "zcopy": true, 00:34:18.282 "get_zone_info": false, 00:34:18.282 "zone_management": false, 00:34:18.282 "zone_append": false, 00:34:18.282 "compare": false, 00:34:18.282 "compare_and_write": false, 00:34:18.282 "abort": true, 00:34:18.282 "seek_hole": false, 00:34:18.282 "seek_data": false, 00:34:18.282 "copy": true, 00:34:18.282 "nvme_iov_md": false 00:34:18.282 }, 00:34:18.282 "memory_domains": [ 00:34:18.282 { 00:34:18.282 "dma_device_id": "system", 00:34:18.282 "dma_device_type": 1 00:34:18.282 }, 00:34:18.282 { 00:34:18.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:18.282 "dma_device_type": 2 00:34:18.282 } 00:34:18.282 ], 00:34:18.282 "driver_specific": {} 00:34:18.282 } 00:34:18.282 ] 00:34:18.282 23:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.282 23:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:34:18.282 23:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:34:18.282 23:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:34:18.282 23:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:34:18.282 23:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:18.282 23:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:18.282 23:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:18.282 23:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:18.282 23:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:18.282 23:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:18.282 23:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:18.282 23:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:18.282 23:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:18.282 23:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:18.282 23:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:18.282 23:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.282 23:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:18.282 23:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.282 23:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:18.282 "name": "Existed_Raid", 00:34:18.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:18.282 "strip_size_kb": 0, 00:34:18.282 "state": "configuring", 00:34:18.282 "raid_level": "raid1", 00:34:18.282 "superblock": false, 00:34:18.282 
"num_base_bdevs": 3, 00:34:18.282 "num_base_bdevs_discovered": 2, 00:34:18.282 "num_base_bdevs_operational": 3, 00:34:18.282 "base_bdevs_list": [ 00:34:18.282 { 00:34:18.282 "name": "BaseBdev1", 00:34:18.282 "uuid": "36c4df72-0a32-493e-a729-e6eb811a8b89", 00:34:18.282 "is_configured": true, 00:34:18.282 "data_offset": 0, 00:34:18.282 "data_size": 65536 00:34:18.282 }, 00:34:18.282 { 00:34:18.282 "name": "BaseBdev2", 00:34:18.282 "uuid": "95ab08cf-ee3b-43b8-b8c8-d16893421c4d", 00:34:18.282 "is_configured": true, 00:34:18.282 "data_offset": 0, 00:34:18.282 "data_size": 65536 00:34:18.282 }, 00:34:18.282 { 00:34:18.282 "name": "BaseBdev3", 00:34:18.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:18.282 "is_configured": false, 00:34:18.282 "data_offset": 0, 00:34:18.282 "data_size": 0 00:34:18.282 } 00:34:18.282 ] 00:34:18.282 }' 00:34:18.282 23:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:18.282 23:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:18.849 23:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:34:18.849 23:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.849 23:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:18.849 [2024-12-09 23:15:59.341759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:18.849 [2024-12-09 23:15:59.342002] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:34:18.849 [2024-12-09 23:15:59.342035] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:34:18.849 [2024-12-09 23:15:59.342387] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:34:18.849 [2024-12-09 23:15:59.342593] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:34:18.849 [2024-12-09 23:15:59.342605] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:34:18.849 [2024-12-09 23:15:59.342900] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:18.849 BaseBdev3 00:34:18.849 23:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.849 23:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:34:18.849 23:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:34:18.849 23:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:18.849 23:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:34:18.849 23:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:18.849 23:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:18.849 23:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:18.849 23:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.849 23:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:18.849 23:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.849 23:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:34:18.849 23:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.849 23:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:18.849 [ 00:34:18.849 { 00:34:18.849 "name": "BaseBdev3", 00:34:18.849 "aliases": [ 00:34:18.849 
"dde9905a-cc8c-4bfc-b537-74d93fa9565b" 00:34:18.849 ], 00:34:18.849 "product_name": "Malloc disk", 00:34:18.849 "block_size": 512, 00:34:18.849 "num_blocks": 65536, 00:34:18.849 "uuid": "dde9905a-cc8c-4bfc-b537-74d93fa9565b", 00:34:18.849 "assigned_rate_limits": { 00:34:18.849 "rw_ios_per_sec": 0, 00:34:18.849 "rw_mbytes_per_sec": 0, 00:34:18.849 "r_mbytes_per_sec": 0, 00:34:18.849 "w_mbytes_per_sec": 0 00:34:18.849 }, 00:34:18.849 "claimed": true, 00:34:18.849 "claim_type": "exclusive_write", 00:34:18.849 "zoned": false, 00:34:18.849 "supported_io_types": { 00:34:18.849 "read": true, 00:34:18.849 "write": true, 00:34:18.849 "unmap": true, 00:34:18.849 "flush": true, 00:34:18.849 "reset": true, 00:34:18.849 "nvme_admin": false, 00:34:18.849 "nvme_io": false, 00:34:18.849 "nvme_io_md": false, 00:34:18.849 "write_zeroes": true, 00:34:18.849 "zcopy": true, 00:34:18.849 "get_zone_info": false, 00:34:18.849 "zone_management": false, 00:34:18.849 "zone_append": false, 00:34:18.849 "compare": false, 00:34:18.849 "compare_and_write": false, 00:34:18.849 "abort": true, 00:34:18.849 "seek_hole": false, 00:34:18.849 "seek_data": false, 00:34:18.849 "copy": true, 00:34:18.849 "nvme_iov_md": false 00:34:18.849 }, 00:34:18.849 "memory_domains": [ 00:34:18.849 { 00:34:18.849 "dma_device_id": "system", 00:34:18.849 "dma_device_type": 1 00:34:18.849 }, 00:34:18.849 { 00:34:18.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:18.849 "dma_device_type": 2 00:34:18.849 } 00:34:18.849 ], 00:34:18.849 "driver_specific": {} 00:34:18.849 } 00:34:18.849 ] 00:34:18.849 23:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.849 23:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:34:18.849 23:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:34:18.849 23:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:34:18.849 
23:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:34:18.849 23:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:18.849 23:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:18.849 23:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:18.849 23:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:18.849 23:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:18.849 23:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:18.849 23:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:18.849 23:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:18.849 23:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:18.849 23:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:18.849 23:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.849 23:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:18.849 23:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:18.849 23:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.849 23:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:18.849 "name": "Existed_Raid", 00:34:18.849 "uuid": "92b91694-965e-48c3-babe-d2b2a0f2cdf5", 00:34:18.849 "strip_size_kb": 0, 00:34:18.849 "state": "online", 00:34:18.849 "raid_level": 
"raid1", 00:34:18.849 "superblock": false, 00:34:18.849 "num_base_bdevs": 3, 00:34:18.849 "num_base_bdevs_discovered": 3, 00:34:18.849 "num_base_bdevs_operational": 3, 00:34:18.849 "base_bdevs_list": [ 00:34:18.849 { 00:34:18.849 "name": "BaseBdev1", 00:34:18.849 "uuid": "36c4df72-0a32-493e-a729-e6eb811a8b89", 00:34:18.849 "is_configured": true, 00:34:18.849 "data_offset": 0, 00:34:18.849 "data_size": 65536 00:34:18.849 }, 00:34:18.849 { 00:34:18.849 "name": "BaseBdev2", 00:34:18.849 "uuid": "95ab08cf-ee3b-43b8-b8c8-d16893421c4d", 00:34:18.849 "is_configured": true, 00:34:18.849 "data_offset": 0, 00:34:18.849 "data_size": 65536 00:34:18.849 }, 00:34:18.849 { 00:34:18.849 "name": "BaseBdev3", 00:34:18.849 "uuid": "dde9905a-cc8c-4bfc-b537-74d93fa9565b", 00:34:18.849 "is_configured": true, 00:34:18.849 "data_offset": 0, 00:34:18.850 "data_size": 65536 00:34:18.850 } 00:34:18.850 ] 00:34:18.850 }' 00:34:18.850 23:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:18.850 23:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.418 23:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:34:19.418 23:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:34:19.418 23:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:34:19.418 23:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:34:19.418 23:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:34:19.418 23:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:34:19.418 23:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:34:19.418 23:15:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:34:19.418 23:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.418 23:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.418 [2024-12-09 23:15:59.793692] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:19.418 23:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.418 23:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:19.418 "name": "Existed_Raid", 00:34:19.418 "aliases": [ 00:34:19.418 "92b91694-965e-48c3-babe-d2b2a0f2cdf5" 00:34:19.418 ], 00:34:19.418 "product_name": "Raid Volume", 00:34:19.418 "block_size": 512, 00:34:19.418 "num_blocks": 65536, 00:34:19.418 "uuid": "92b91694-965e-48c3-babe-d2b2a0f2cdf5", 00:34:19.418 "assigned_rate_limits": { 00:34:19.418 "rw_ios_per_sec": 0, 00:34:19.418 "rw_mbytes_per_sec": 0, 00:34:19.418 "r_mbytes_per_sec": 0, 00:34:19.418 "w_mbytes_per_sec": 0 00:34:19.418 }, 00:34:19.418 "claimed": false, 00:34:19.418 "zoned": false, 00:34:19.418 "supported_io_types": { 00:34:19.418 "read": true, 00:34:19.418 "write": true, 00:34:19.418 "unmap": false, 00:34:19.418 "flush": false, 00:34:19.418 "reset": true, 00:34:19.418 "nvme_admin": false, 00:34:19.418 "nvme_io": false, 00:34:19.418 "nvme_io_md": false, 00:34:19.418 "write_zeroes": true, 00:34:19.418 "zcopy": false, 00:34:19.418 "get_zone_info": false, 00:34:19.418 "zone_management": false, 00:34:19.418 "zone_append": false, 00:34:19.418 "compare": false, 00:34:19.418 "compare_and_write": false, 00:34:19.418 "abort": false, 00:34:19.418 "seek_hole": false, 00:34:19.418 "seek_data": false, 00:34:19.418 "copy": false, 00:34:19.418 "nvme_iov_md": false 00:34:19.418 }, 00:34:19.418 "memory_domains": [ 00:34:19.418 { 00:34:19.418 "dma_device_id": "system", 00:34:19.418 "dma_device_type": 1 00:34:19.418 }, 00:34:19.418 { 
00:34:19.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:19.418 "dma_device_type": 2 00:34:19.418 }, 00:34:19.418 { 00:34:19.418 "dma_device_id": "system", 00:34:19.418 "dma_device_type": 1 00:34:19.418 }, 00:34:19.418 { 00:34:19.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:19.418 "dma_device_type": 2 00:34:19.418 }, 00:34:19.418 { 00:34:19.418 "dma_device_id": "system", 00:34:19.418 "dma_device_type": 1 00:34:19.418 }, 00:34:19.418 { 00:34:19.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:19.418 "dma_device_type": 2 00:34:19.418 } 00:34:19.418 ], 00:34:19.418 "driver_specific": { 00:34:19.418 "raid": { 00:34:19.418 "uuid": "92b91694-965e-48c3-babe-d2b2a0f2cdf5", 00:34:19.418 "strip_size_kb": 0, 00:34:19.418 "state": "online", 00:34:19.418 "raid_level": "raid1", 00:34:19.418 "superblock": false, 00:34:19.418 "num_base_bdevs": 3, 00:34:19.418 "num_base_bdevs_discovered": 3, 00:34:19.418 "num_base_bdevs_operational": 3, 00:34:19.418 "base_bdevs_list": [ 00:34:19.418 { 00:34:19.418 "name": "BaseBdev1", 00:34:19.418 "uuid": "36c4df72-0a32-493e-a729-e6eb811a8b89", 00:34:19.418 "is_configured": true, 00:34:19.418 "data_offset": 0, 00:34:19.418 "data_size": 65536 00:34:19.418 }, 00:34:19.418 { 00:34:19.418 "name": "BaseBdev2", 00:34:19.418 "uuid": "95ab08cf-ee3b-43b8-b8c8-d16893421c4d", 00:34:19.418 "is_configured": true, 00:34:19.418 "data_offset": 0, 00:34:19.418 "data_size": 65536 00:34:19.418 }, 00:34:19.418 { 00:34:19.418 "name": "BaseBdev3", 00:34:19.418 "uuid": "dde9905a-cc8c-4bfc-b537-74d93fa9565b", 00:34:19.418 "is_configured": true, 00:34:19.418 "data_offset": 0, 00:34:19.418 "data_size": 65536 00:34:19.418 } 00:34:19.418 ] 00:34:19.418 } 00:34:19.418 } 00:34:19.418 }' 00:34:19.418 23:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:19.418 23:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='BaseBdev1 00:34:19.418 BaseBdev2 00:34:19.418 BaseBdev3' 00:34:19.418 23:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:19.418 23:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:34:19.419 23:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:19.419 23:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:19.419 23:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:34:19.419 23:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.419 23:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.419 23:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.419 23:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:19.419 23:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:19.419 23:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:19.419 23:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:34:19.419 23:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.419 23:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:19.419 23:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.419 23:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:34:19.419 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:19.419 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:19.419 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:19.419 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:19.419 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:34:19.419 23:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.419 23:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.676 23:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.676 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:19.676 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:19.676 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:34:19.676 23:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.676 23:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.676 [2024-12-09 23:16:00.077029] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:19.676 23:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.676 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:34:19.676 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:34:19.676 23:16:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@198 -- # case $1 in 00:34:19.676 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:34:19.676 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:34:19.676 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:34:19.676 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:19.676 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:19.676 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:19.676 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:19.676 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:19.676 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:19.676 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:19.676 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:19.676 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:19.676 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:19.676 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:19.676 23:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.676 23:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.676 23:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.676 23:16:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:19.676 "name": "Existed_Raid", 00:34:19.676 "uuid": "92b91694-965e-48c3-babe-d2b2a0f2cdf5", 00:34:19.676 "strip_size_kb": 0, 00:34:19.676 "state": "online", 00:34:19.676 "raid_level": "raid1", 00:34:19.676 "superblock": false, 00:34:19.676 "num_base_bdevs": 3, 00:34:19.676 "num_base_bdevs_discovered": 2, 00:34:19.676 "num_base_bdevs_operational": 2, 00:34:19.676 "base_bdevs_list": [ 00:34:19.676 { 00:34:19.676 "name": null, 00:34:19.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:19.676 "is_configured": false, 00:34:19.676 "data_offset": 0, 00:34:19.676 "data_size": 65536 00:34:19.676 }, 00:34:19.676 { 00:34:19.676 "name": "BaseBdev2", 00:34:19.676 "uuid": "95ab08cf-ee3b-43b8-b8c8-d16893421c4d", 00:34:19.676 "is_configured": true, 00:34:19.676 "data_offset": 0, 00:34:19.676 "data_size": 65536 00:34:19.676 }, 00:34:19.676 { 00:34:19.676 "name": "BaseBdev3", 00:34:19.676 "uuid": "dde9905a-cc8c-4bfc-b537-74d93fa9565b", 00:34:19.676 "is_configured": true, 00:34:19.676 "data_offset": 0, 00:34:19.676 "data_size": 65536 00:34:19.676 } 00:34:19.676 ] 00:34:19.676 }' 00:34:19.676 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:19.676 23:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:20.253 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:34:20.253 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:34:20.253 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:20.253 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:34:20.253 23:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.253 23:16:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:34:20.253 23:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.253 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:34:20.253 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:20.253 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:34:20.253 23:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.253 23:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:20.253 [2024-12-09 23:16:00.664593] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:20.253 23:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.253 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:34:20.253 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:34:20.253 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:20.253 23:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.253 23:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:20.253 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:34:20.253 23:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.253 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:34:20.253 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:20.253 23:16:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:34:20.253 23:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.253 23:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:20.253 [2024-12-09 23:16:00.830271] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:34:20.253 [2024-12-09 23:16:00.830553] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:20.531 [2024-12-09 23:16:00.935041] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:20.531 [2024-12-09 23:16:00.935107] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:20.531 [2024-12-09 23:16:00.935124] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:34:20.531 23:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.531 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:34:20.531 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:34:20.531 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:20.531 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:34:20.531 23:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.531 23:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:20.531 23:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.531 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:34:20.531 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 
-- # '[' -n '' ']' 00:34:20.531 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:34:20.531 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:34:20.531 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:34:20.531 23:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:34:20.531 23:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.531 23:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:20.531 BaseBdev2 00:34:20.531 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.531 23:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:34:20.531 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:34:20.531 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:20.531 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:34:20.531 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:20.531 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:20.532 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:20.532 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.532 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:20.532 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.532 23:16:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:20.532 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.532 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:20.532 [ 00:34:20.532 { 00:34:20.532 "name": "BaseBdev2", 00:34:20.532 "aliases": [ 00:34:20.532 "3208e41a-f43f-45ec-bff2-05f10555b765" 00:34:20.532 ], 00:34:20.532 "product_name": "Malloc disk", 00:34:20.532 "block_size": 512, 00:34:20.532 "num_blocks": 65536, 00:34:20.532 "uuid": "3208e41a-f43f-45ec-bff2-05f10555b765", 00:34:20.532 "assigned_rate_limits": { 00:34:20.532 "rw_ios_per_sec": 0, 00:34:20.532 "rw_mbytes_per_sec": 0, 00:34:20.532 "r_mbytes_per_sec": 0, 00:34:20.532 "w_mbytes_per_sec": 0 00:34:20.532 }, 00:34:20.532 "claimed": false, 00:34:20.532 "zoned": false, 00:34:20.532 "supported_io_types": { 00:34:20.532 "read": true, 00:34:20.532 "write": true, 00:34:20.532 "unmap": true, 00:34:20.532 "flush": true, 00:34:20.532 "reset": true, 00:34:20.532 "nvme_admin": false, 00:34:20.532 "nvme_io": false, 00:34:20.532 "nvme_io_md": false, 00:34:20.532 "write_zeroes": true, 00:34:20.532 "zcopy": true, 00:34:20.532 "get_zone_info": false, 00:34:20.532 "zone_management": false, 00:34:20.532 "zone_append": false, 00:34:20.532 "compare": false, 00:34:20.532 "compare_and_write": false, 00:34:20.532 "abort": true, 00:34:20.532 "seek_hole": false, 00:34:20.532 "seek_data": false, 00:34:20.532 "copy": true, 00:34:20.532 "nvme_iov_md": false 00:34:20.532 }, 00:34:20.532 "memory_domains": [ 00:34:20.532 { 00:34:20.532 "dma_device_id": "system", 00:34:20.532 "dma_device_type": 1 00:34:20.532 }, 00:34:20.532 { 00:34:20.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:20.532 "dma_device_type": 2 00:34:20.532 } 00:34:20.532 ], 00:34:20.532 "driver_specific": {} 00:34:20.532 } 00:34:20.532 ] 00:34:20.532 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:34:20.532 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:34:20.532 23:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:34:20.532 23:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:34:20.532 23:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:34:20.532 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.532 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:20.532 BaseBdev3 00:34:20.532 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.532 23:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:34:20.532 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:34:20.532 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:20.532 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:34:20.532 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:20.532 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:20.532 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:20.532 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.532 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:20.532 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.532 23:16:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:34:20.532 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.532 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:20.532 [ 00:34:20.532 { 00:34:20.532 "name": "BaseBdev3", 00:34:20.533 "aliases": [ 00:34:20.533 "dc8634f5-ae35-4a5f-8b9e-a9554dc9ca11" 00:34:20.533 ], 00:34:20.533 "product_name": "Malloc disk", 00:34:20.533 "block_size": 512, 00:34:20.533 "num_blocks": 65536, 00:34:20.533 "uuid": "dc8634f5-ae35-4a5f-8b9e-a9554dc9ca11", 00:34:20.533 "assigned_rate_limits": { 00:34:20.533 "rw_ios_per_sec": 0, 00:34:20.533 "rw_mbytes_per_sec": 0, 00:34:20.533 "r_mbytes_per_sec": 0, 00:34:20.533 "w_mbytes_per_sec": 0 00:34:20.533 }, 00:34:20.533 "claimed": false, 00:34:20.533 "zoned": false, 00:34:20.533 "supported_io_types": { 00:34:20.533 "read": true, 00:34:20.533 "write": true, 00:34:20.533 "unmap": true, 00:34:20.533 "flush": true, 00:34:20.533 "reset": true, 00:34:20.792 "nvme_admin": false, 00:34:20.792 "nvme_io": false, 00:34:20.792 "nvme_io_md": false, 00:34:20.792 "write_zeroes": true, 00:34:20.792 "zcopy": true, 00:34:20.792 "get_zone_info": false, 00:34:20.792 "zone_management": false, 00:34:20.792 "zone_append": false, 00:34:20.792 "compare": false, 00:34:20.792 "compare_and_write": false, 00:34:20.792 "abort": true, 00:34:20.792 "seek_hole": false, 00:34:20.792 "seek_data": false, 00:34:20.792 "copy": true, 00:34:20.792 "nvme_iov_md": false 00:34:20.792 }, 00:34:20.792 "memory_domains": [ 00:34:20.792 { 00:34:20.792 "dma_device_id": "system", 00:34:20.792 "dma_device_type": 1 00:34:20.792 }, 00:34:20.792 { 00:34:20.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:20.792 "dma_device_type": 2 00:34:20.792 } 00:34:20.792 ], 00:34:20.792 "driver_specific": {} 00:34:20.792 } 00:34:20.792 ] 00:34:20.792 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:34:20.792 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:34:20.792 23:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:34:20.792 23:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:34:20.792 23:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:34:20.792 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.792 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:20.792 [2024-12-09 23:16:01.180446] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:20.792 [2024-12-09 23:16:01.180638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:20.792 [2024-12-09 23:16:01.180753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:20.792 [2024-12-09 23:16:01.183302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:20.792 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.792 23:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:34:20.792 23:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:20.792 23:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:20.792 23:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:20.792 23:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:20.792 23:16:01 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:20.792 23:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:20.792 23:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:20.792 23:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:20.792 23:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:20.792 23:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:20.792 23:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:20.792 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.792 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:20.792 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.792 23:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:20.792 "name": "Existed_Raid", 00:34:20.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:20.792 "strip_size_kb": 0, 00:34:20.792 "state": "configuring", 00:34:20.792 "raid_level": "raid1", 00:34:20.792 "superblock": false, 00:34:20.792 "num_base_bdevs": 3, 00:34:20.792 "num_base_bdevs_discovered": 2, 00:34:20.792 "num_base_bdevs_operational": 3, 00:34:20.792 "base_bdevs_list": [ 00:34:20.792 { 00:34:20.792 "name": "BaseBdev1", 00:34:20.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:20.792 "is_configured": false, 00:34:20.792 "data_offset": 0, 00:34:20.792 "data_size": 0 00:34:20.792 }, 00:34:20.792 { 00:34:20.792 "name": "BaseBdev2", 00:34:20.792 "uuid": "3208e41a-f43f-45ec-bff2-05f10555b765", 00:34:20.792 "is_configured": true, 00:34:20.792 "data_offset": 0, 00:34:20.792 "data_size": 
65536 00:34:20.792 }, 00:34:20.792 { 00:34:20.792 "name": "BaseBdev3", 00:34:20.792 "uuid": "dc8634f5-ae35-4a5f-8b9e-a9554dc9ca11", 00:34:20.792 "is_configured": true, 00:34:20.792 "data_offset": 0, 00:34:20.792 "data_size": 65536 00:34:20.792 } 00:34:20.792 ] 00:34:20.792 }' 00:34:20.792 23:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:20.792 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:21.050 23:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:34:21.050 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.050 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:21.050 [2024-12-09 23:16:01.659775] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:21.050 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.050 23:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:34:21.050 23:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:21.050 23:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:21.050 23:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:21.050 23:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:21.051 23:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:21.051 23:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:21.051 23:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:21.051 23:16:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:21.051 23:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:21.051 23:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:21.051 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.051 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:21.051 23:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:21.307 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.307 23:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:21.307 "name": "Existed_Raid", 00:34:21.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:21.307 "strip_size_kb": 0, 00:34:21.307 "state": "configuring", 00:34:21.307 "raid_level": "raid1", 00:34:21.307 "superblock": false, 00:34:21.307 "num_base_bdevs": 3, 00:34:21.307 "num_base_bdevs_discovered": 1, 00:34:21.307 "num_base_bdevs_operational": 3, 00:34:21.307 "base_bdevs_list": [ 00:34:21.307 { 00:34:21.307 "name": "BaseBdev1", 00:34:21.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:21.307 "is_configured": false, 00:34:21.307 "data_offset": 0, 00:34:21.307 "data_size": 0 00:34:21.307 }, 00:34:21.307 { 00:34:21.307 "name": null, 00:34:21.307 "uuid": "3208e41a-f43f-45ec-bff2-05f10555b765", 00:34:21.307 "is_configured": false, 00:34:21.307 "data_offset": 0, 00:34:21.307 "data_size": 65536 00:34:21.307 }, 00:34:21.307 { 00:34:21.307 "name": "BaseBdev3", 00:34:21.307 "uuid": "dc8634f5-ae35-4a5f-8b9e-a9554dc9ca11", 00:34:21.307 "is_configured": true, 00:34:21.307 "data_offset": 0, 00:34:21.307 "data_size": 65536 00:34:21.307 } 00:34:21.307 ] 00:34:21.307 }' 00:34:21.307 23:16:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:21.307 23:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:21.565 23:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:34:21.565 23:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:21.565 23:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.565 23:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:21.565 23:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.565 23:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:34:21.565 23:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:34:21.565 23:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.565 23:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:21.565 [2024-12-09 23:16:02.123636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:21.565 BaseBdev1 00:34:21.565 23:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.565 23:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:34:21.565 23:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:34:21.565 23:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:21.565 23:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:34:21.565 23:16:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:21.565 23:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:21.565 23:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:21.565 23:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.565 23:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:21.565 23:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.565 23:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:21.565 23:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.565 23:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:21.565 [ 00:34:21.565 { 00:34:21.565 "name": "BaseBdev1", 00:34:21.565 "aliases": [ 00:34:21.565 "c169687c-2c06-4b9f-b61f-5b0621c3efa3" 00:34:21.565 ], 00:34:21.565 "product_name": "Malloc disk", 00:34:21.565 "block_size": 512, 00:34:21.565 "num_blocks": 65536, 00:34:21.565 "uuid": "c169687c-2c06-4b9f-b61f-5b0621c3efa3", 00:34:21.565 "assigned_rate_limits": { 00:34:21.565 "rw_ios_per_sec": 0, 00:34:21.565 "rw_mbytes_per_sec": 0, 00:34:21.565 "r_mbytes_per_sec": 0, 00:34:21.565 "w_mbytes_per_sec": 0 00:34:21.565 }, 00:34:21.565 "claimed": true, 00:34:21.565 "claim_type": "exclusive_write", 00:34:21.565 "zoned": false, 00:34:21.565 "supported_io_types": { 00:34:21.565 "read": true, 00:34:21.565 "write": true, 00:34:21.565 "unmap": true, 00:34:21.565 "flush": true, 00:34:21.565 "reset": true, 00:34:21.565 "nvme_admin": false, 00:34:21.565 "nvme_io": false, 00:34:21.565 "nvme_io_md": false, 00:34:21.565 "write_zeroes": true, 00:34:21.565 "zcopy": true, 00:34:21.565 "get_zone_info": false, 00:34:21.565 "zone_management": false, 
00:34:21.565 "zone_append": false, 00:34:21.565 "compare": false, 00:34:21.565 "compare_and_write": false, 00:34:21.565 "abort": true, 00:34:21.565 "seek_hole": false, 00:34:21.565 "seek_data": false, 00:34:21.565 "copy": true, 00:34:21.565 "nvme_iov_md": false 00:34:21.565 }, 00:34:21.565 "memory_domains": [ 00:34:21.565 { 00:34:21.565 "dma_device_id": "system", 00:34:21.565 "dma_device_type": 1 00:34:21.565 }, 00:34:21.565 { 00:34:21.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:21.565 "dma_device_type": 2 00:34:21.565 } 00:34:21.565 ], 00:34:21.565 "driver_specific": {} 00:34:21.565 } 00:34:21.565 ] 00:34:21.565 23:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.565 23:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:34:21.566 23:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:34:21.566 23:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:21.566 23:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:21.566 23:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:21.566 23:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:21.566 23:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:21.566 23:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:21.566 23:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:21.566 23:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:21.566 23:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:21.566 
23:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:21.566 23:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:21.566 23:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.566 23:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:21.566 23:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.824 23:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:21.824 "name": "Existed_Raid", 00:34:21.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:21.825 "strip_size_kb": 0, 00:34:21.825 "state": "configuring", 00:34:21.825 "raid_level": "raid1", 00:34:21.825 "superblock": false, 00:34:21.825 "num_base_bdevs": 3, 00:34:21.825 "num_base_bdevs_discovered": 2, 00:34:21.825 "num_base_bdevs_operational": 3, 00:34:21.825 "base_bdevs_list": [ 00:34:21.825 { 00:34:21.825 "name": "BaseBdev1", 00:34:21.825 "uuid": "c169687c-2c06-4b9f-b61f-5b0621c3efa3", 00:34:21.825 "is_configured": true, 00:34:21.825 "data_offset": 0, 00:34:21.825 "data_size": 65536 00:34:21.825 }, 00:34:21.825 { 00:34:21.825 "name": null, 00:34:21.825 "uuid": "3208e41a-f43f-45ec-bff2-05f10555b765", 00:34:21.825 "is_configured": false, 00:34:21.825 "data_offset": 0, 00:34:21.825 "data_size": 65536 00:34:21.825 }, 00:34:21.825 { 00:34:21.825 "name": "BaseBdev3", 00:34:21.825 "uuid": "dc8634f5-ae35-4a5f-8b9e-a9554dc9ca11", 00:34:21.825 "is_configured": true, 00:34:21.825 "data_offset": 0, 00:34:21.825 "data_size": 65536 00:34:21.825 } 00:34:21.825 ] 00:34:21.825 }' 00:34:21.825 23:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:21.825 23:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:22.083 23:16:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:22.083 23:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.083 23:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:22.083 23:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:34:22.083 23:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.083 23:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:34:22.083 23:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:34:22.083 23:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.083 23:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:22.083 [2024-12-09 23:16:02.706954] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:34:22.083 23:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.083 23:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:34:22.083 23:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:22.083 23:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:22.083 23:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:22.084 23:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:22.084 23:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:22.084 23:16:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:22.084 23:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:22.084 23:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:22.084 23:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:22.343 23:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:22.343 23:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.343 23:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:22.343 23:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:22.343 23:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.343 23:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:22.343 "name": "Existed_Raid", 00:34:22.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:22.343 "strip_size_kb": 0, 00:34:22.343 "state": "configuring", 00:34:22.343 "raid_level": "raid1", 00:34:22.343 "superblock": false, 00:34:22.343 "num_base_bdevs": 3, 00:34:22.343 "num_base_bdevs_discovered": 1, 00:34:22.343 "num_base_bdevs_operational": 3, 00:34:22.343 "base_bdevs_list": [ 00:34:22.343 { 00:34:22.343 "name": "BaseBdev1", 00:34:22.343 "uuid": "c169687c-2c06-4b9f-b61f-5b0621c3efa3", 00:34:22.343 "is_configured": true, 00:34:22.343 "data_offset": 0, 00:34:22.343 "data_size": 65536 00:34:22.343 }, 00:34:22.343 { 00:34:22.343 "name": null, 00:34:22.343 "uuid": "3208e41a-f43f-45ec-bff2-05f10555b765", 00:34:22.343 "is_configured": false, 00:34:22.343 "data_offset": 0, 00:34:22.343 "data_size": 65536 00:34:22.343 }, 00:34:22.343 { 00:34:22.343 "name": null, 00:34:22.343 "uuid": "dc8634f5-ae35-4a5f-8b9e-a9554dc9ca11", 
00:34:22.343 "is_configured": false, 00:34:22.343 "data_offset": 0, 00:34:22.343 "data_size": 65536 00:34:22.343 } 00:34:22.343 ] 00:34:22.343 }' 00:34:22.343 23:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:22.343 23:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:22.602 23:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:22.602 23:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.602 23:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:22.602 23:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:34:22.602 23:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.602 23:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:34:22.602 23:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:34:22.602 23:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.602 23:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:22.602 [2024-12-09 23:16:03.214477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:22.602 23:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.602 23:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:34:22.602 23:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:22.602 23:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:34:22.602 23:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:22.602 23:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:22.602 23:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:22.602 23:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:22.602 23:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:22.602 23:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:22.602 23:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:22.602 23:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:22.602 23:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:22.602 23:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.602 23:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:22.862 23:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.862 23:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:22.862 "name": "Existed_Raid", 00:34:22.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:22.862 "strip_size_kb": 0, 00:34:22.862 "state": "configuring", 00:34:22.862 "raid_level": "raid1", 00:34:22.862 "superblock": false, 00:34:22.862 "num_base_bdevs": 3, 00:34:22.862 "num_base_bdevs_discovered": 2, 00:34:22.862 "num_base_bdevs_operational": 3, 00:34:22.862 "base_bdevs_list": [ 00:34:22.862 { 00:34:22.862 "name": "BaseBdev1", 00:34:22.862 "uuid": "c169687c-2c06-4b9f-b61f-5b0621c3efa3", 00:34:22.862 
"is_configured": true, 00:34:22.862 "data_offset": 0, 00:34:22.862 "data_size": 65536 00:34:22.862 }, 00:34:22.862 { 00:34:22.862 "name": null, 00:34:22.862 "uuid": "3208e41a-f43f-45ec-bff2-05f10555b765", 00:34:22.862 "is_configured": false, 00:34:22.862 "data_offset": 0, 00:34:22.862 "data_size": 65536 00:34:22.862 }, 00:34:22.862 { 00:34:22.862 "name": "BaseBdev3", 00:34:22.862 "uuid": "dc8634f5-ae35-4a5f-8b9e-a9554dc9ca11", 00:34:22.862 "is_configured": true, 00:34:22.862 "data_offset": 0, 00:34:22.862 "data_size": 65536 00:34:22.862 } 00:34:22.862 ] 00:34:22.862 }' 00:34:22.862 23:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:22.862 23:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:23.121 23:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:34:23.121 23:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:23.121 23:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.121 23:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:23.121 23:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.121 23:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:34:23.121 23:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:34:23.121 23:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.121 23:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:23.121 [2024-12-09 23:16:03.718391] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:23.380 23:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:34:23.380 23:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:34:23.380 23:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:23.380 23:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:23.380 23:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:23.380 23:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:23.380 23:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:23.380 23:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:23.380 23:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:23.380 23:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:23.380 23:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:23.380 23:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:23.380 23:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.380 23:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:23.380 23:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:23.380 23:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.380 23:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:23.380 "name": "Existed_Raid", 00:34:23.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:23.380 "strip_size_kb": 0, 00:34:23.380 "state": 
"configuring", 00:34:23.380 "raid_level": "raid1", 00:34:23.380 "superblock": false, 00:34:23.380 "num_base_bdevs": 3, 00:34:23.380 "num_base_bdevs_discovered": 1, 00:34:23.380 "num_base_bdevs_operational": 3, 00:34:23.380 "base_bdevs_list": [ 00:34:23.380 { 00:34:23.380 "name": null, 00:34:23.380 "uuid": "c169687c-2c06-4b9f-b61f-5b0621c3efa3", 00:34:23.380 "is_configured": false, 00:34:23.380 "data_offset": 0, 00:34:23.380 "data_size": 65536 00:34:23.380 }, 00:34:23.380 { 00:34:23.380 "name": null, 00:34:23.380 "uuid": "3208e41a-f43f-45ec-bff2-05f10555b765", 00:34:23.380 "is_configured": false, 00:34:23.380 "data_offset": 0, 00:34:23.380 "data_size": 65536 00:34:23.380 }, 00:34:23.380 { 00:34:23.380 "name": "BaseBdev3", 00:34:23.380 "uuid": "dc8634f5-ae35-4a5f-8b9e-a9554dc9ca11", 00:34:23.380 "is_configured": true, 00:34:23.380 "data_offset": 0, 00:34:23.380 "data_size": 65536 00:34:23.380 } 00:34:23.380 ] 00:34:23.380 }' 00:34:23.380 23:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:23.380 23:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:23.638 23:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:23.638 23:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:34:23.897 23:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.897 23:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:23.897 23:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.897 23:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:34:23.897 23:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:34:23.897 23:16:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.897 23:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:23.897 [2024-12-09 23:16:04.317583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:23.897 23:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.897 23:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:34:23.897 23:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:23.897 23:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:23.897 23:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:23.897 23:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:23.897 23:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:23.897 23:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:23.897 23:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:23.897 23:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:23.897 23:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:23.897 23:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:23.897 23:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.897 23:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:23.897 23:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:34:23.897 23:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.897 23:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:23.897 "name": "Existed_Raid", 00:34:23.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:23.897 "strip_size_kb": 0, 00:34:23.897 "state": "configuring", 00:34:23.897 "raid_level": "raid1", 00:34:23.897 "superblock": false, 00:34:23.897 "num_base_bdevs": 3, 00:34:23.897 "num_base_bdevs_discovered": 2, 00:34:23.897 "num_base_bdevs_operational": 3, 00:34:23.897 "base_bdevs_list": [ 00:34:23.897 { 00:34:23.897 "name": null, 00:34:23.897 "uuid": "c169687c-2c06-4b9f-b61f-5b0621c3efa3", 00:34:23.897 "is_configured": false, 00:34:23.897 "data_offset": 0, 00:34:23.897 "data_size": 65536 00:34:23.897 }, 00:34:23.897 { 00:34:23.897 "name": "BaseBdev2", 00:34:23.897 "uuid": "3208e41a-f43f-45ec-bff2-05f10555b765", 00:34:23.897 "is_configured": true, 00:34:23.897 "data_offset": 0, 00:34:23.897 "data_size": 65536 00:34:23.897 }, 00:34:23.897 { 00:34:23.897 "name": "BaseBdev3", 00:34:23.897 "uuid": "dc8634f5-ae35-4a5f-8b9e-a9554dc9ca11", 00:34:23.897 "is_configured": true, 00:34:23.897 "data_offset": 0, 00:34:23.897 "data_size": 65536 00:34:23.897 } 00:34:23.897 ] 00:34:23.897 }' 00:34:23.897 23:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:23.897 23:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:24.155 23:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:34:24.155 23:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:24.156 23:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.156 23:16:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:34:24.156 23:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.156 23:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:34:24.156 23:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:24.156 23:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.156 23:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:24.156 23:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:34:24.414 23:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.414 23:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c169687c-2c06-4b9f-b61f-5b0621c3efa3 00:34:24.414 23:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.414 23:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:24.414 [2024-12-09 23:16:04.870278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:34:24.414 [2024-12-09 23:16:04.870335] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:34:24.414 [2024-12-09 23:16:04.870344] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:34:24.414 [2024-12-09 23:16:04.870644] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:34:24.414 [2024-12-09 23:16:04.870788] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:34:24.414 [2024-12-09 23:16:04.870803] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000008200 00:34:24.414 [2024-12-09 23:16:04.871050] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:24.414 NewBaseBdev 00:34:24.414 23:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.414 23:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:34:24.414 23:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:34:24.414 23:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:24.414 23:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:34:24.414 23:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:24.414 23:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:24.414 23:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:24.414 23:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.414 23:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:24.414 23:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.414 23:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:34:24.414 23:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.414 23:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:24.414 [ 00:34:24.414 { 00:34:24.414 "name": "NewBaseBdev", 00:34:24.414 "aliases": [ 00:34:24.414 "c169687c-2c06-4b9f-b61f-5b0621c3efa3" 00:34:24.414 ], 00:34:24.414 "product_name": "Malloc disk", 00:34:24.414 "block_size": 512, 00:34:24.414 "num_blocks": 65536, 
00:34:24.414 "uuid": "c169687c-2c06-4b9f-b61f-5b0621c3efa3", 00:34:24.414 "assigned_rate_limits": { 00:34:24.414 "rw_ios_per_sec": 0, 00:34:24.414 "rw_mbytes_per_sec": 0, 00:34:24.414 "r_mbytes_per_sec": 0, 00:34:24.415 "w_mbytes_per_sec": 0 00:34:24.415 }, 00:34:24.415 "claimed": true, 00:34:24.415 "claim_type": "exclusive_write", 00:34:24.415 "zoned": false, 00:34:24.415 "supported_io_types": { 00:34:24.415 "read": true, 00:34:24.415 "write": true, 00:34:24.415 "unmap": true, 00:34:24.415 "flush": true, 00:34:24.415 "reset": true, 00:34:24.415 "nvme_admin": false, 00:34:24.415 "nvme_io": false, 00:34:24.415 "nvme_io_md": false, 00:34:24.415 "write_zeroes": true, 00:34:24.415 "zcopy": true, 00:34:24.415 "get_zone_info": false, 00:34:24.415 "zone_management": false, 00:34:24.415 "zone_append": false, 00:34:24.415 "compare": false, 00:34:24.415 "compare_and_write": false, 00:34:24.415 "abort": true, 00:34:24.415 "seek_hole": false, 00:34:24.415 "seek_data": false, 00:34:24.415 "copy": true, 00:34:24.415 "nvme_iov_md": false 00:34:24.415 }, 00:34:24.415 "memory_domains": [ 00:34:24.415 { 00:34:24.415 "dma_device_id": "system", 00:34:24.415 "dma_device_type": 1 00:34:24.415 }, 00:34:24.415 { 00:34:24.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:24.415 "dma_device_type": 2 00:34:24.415 } 00:34:24.415 ], 00:34:24.415 "driver_specific": {} 00:34:24.415 } 00:34:24.415 ] 00:34:24.415 23:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.415 23:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:34:24.415 23:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:34:24.415 23:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:24.415 23:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:24.415 
23:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:24.415 23:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:24.415 23:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:24.415 23:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:24.415 23:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:24.415 23:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:24.415 23:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:24.415 23:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:24.415 23:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:24.415 23:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.415 23:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:24.415 23:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.415 23:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:24.415 "name": "Existed_Raid", 00:34:24.415 "uuid": "81a72082-1ba2-420a-a481-ed97670fa7c9", 00:34:24.415 "strip_size_kb": 0, 00:34:24.415 "state": "online", 00:34:24.415 "raid_level": "raid1", 00:34:24.415 "superblock": false, 00:34:24.415 "num_base_bdevs": 3, 00:34:24.415 "num_base_bdevs_discovered": 3, 00:34:24.415 "num_base_bdevs_operational": 3, 00:34:24.415 "base_bdevs_list": [ 00:34:24.415 { 00:34:24.415 "name": "NewBaseBdev", 00:34:24.415 "uuid": "c169687c-2c06-4b9f-b61f-5b0621c3efa3", 00:34:24.415 "is_configured": true, 00:34:24.415 
"data_offset": 0, 00:34:24.415 "data_size": 65536 00:34:24.415 }, 00:34:24.415 { 00:34:24.415 "name": "BaseBdev2", 00:34:24.415 "uuid": "3208e41a-f43f-45ec-bff2-05f10555b765", 00:34:24.415 "is_configured": true, 00:34:24.415 "data_offset": 0, 00:34:24.415 "data_size": 65536 00:34:24.415 }, 00:34:24.415 { 00:34:24.415 "name": "BaseBdev3", 00:34:24.415 "uuid": "dc8634f5-ae35-4a5f-8b9e-a9554dc9ca11", 00:34:24.415 "is_configured": true, 00:34:24.415 "data_offset": 0, 00:34:24.415 "data_size": 65536 00:34:24.415 } 00:34:24.415 ] 00:34:24.415 }' 00:34:24.415 23:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:24.415 23:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:24.980 23:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:34:24.980 23:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:34:24.980 23:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:34:24.981 [2024-12-09 23:16:05.342319] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:24.981 "name": "Existed_Raid", 00:34:24.981 "aliases": [ 00:34:24.981 "81a72082-1ba2-420a-a481-ed97670fa7c9" 00:34:24.981 ], 00:34:24.981 "product_name": "Raid Volume", 00:34:24.981 "block_size": 512, 00:34:24.981 "num_blocks": 65536, 00:34:24.981 "uuid": "81a72082-1ba2-420a-a481-ed97670fa7c9", 00:34:24.981 "assigned_rate_limits": { 00:34:24.981 "rw_ios_per_sec": 0, 00:34:24.981 "rw_mbytes_per_sec": 0, 00:34:24.981 "r_mbytes_per_sec": 0, 00:34:24.981 "w_mbytes_per_sec": 0 00:34:24.981 }, 00:34:24.981 "claimed": false, 00:34:24.981 "zoned": false, 00:34:24.981 "supported_io_types": { 00:34:24.981 "read": true, 00:34:24.981 "write": true, 00:34:24.981 "unmap": false, 00:34:24.981 "flush": false, 00:34:24.981 "reset": true, 00:34:24.981 "nvme_admin": false, 00:34:24.981 "nvme_io": false, 00:34:24.981 "nvme_io_md": false, 00:34:24.981 "write_zeroes": true, 00:34:24.981 "zcopy": false, 00:34:24.981 "get_zone_info": false, 00:34:24.981 "zone_management": false, 00:34:24.981 "zone_append": false, 00:34:24.981 "compare": false, 00:34:24.981 "compare_and_write": false, 00:34:24.981 "abort": false, 00:34:24.981 "seek_hole": false, 00:34:24.981 "seek_data": false, 00:34:24.981 "copy": false, 00:34:24.981 "nvme_iov_md": false 00:34:24.981 }, 00:34:24.981 "memory_domains": [ 00:34:24.981 { 00:34:24.981 "dma_device_id": "system", 00:34:24.981 "dma_device_type": 1 00:34:24.981 }, 00:34:24.981 { 00:34:24.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:24.981 "dma_device_type": 2 00:34:24.981 }, 00:34:24.981 { 00:34:24.981 "dma_device_id": "system", 00:34:24.981 "dma_device_type": 1 00:34:24.981 }, 00:34:24.981 { 00:34:24.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:24.981 "dma_device_type": 2 00:34:24.981 }, 00:34:24.981 { 00:34:24.981 "dma_device_id": 
"system", 00:34:24.981 "dma_device_type": 1 00:34:24.981 }, 00:34:24.981 { 00:34:24.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:24.981 "dma_device_type": 2 00:34:24.981 } 00:34:24.981 ], 00:34:24.981 "driver_specific": { 00:34:24.981 "raid": { 00:34:24.981 "uuid": "81a72082-1ba2-420a-a481-ed97670fa7c9", 00:34:24.981 "strip_size_kb": 0, 00:34:24.981 "state": "online", 00:34:24.981 "raid_level": "raid1", 00:34:24.981 "superblock": false, 00:34:24.981 "num_base_bdevs": 3, 00:34:24.981 "num_base_bdevs_discovered": 3, 00:34:24.981 "num_base_bdevs_operational": 3, 00:34:24.981 "base_bdevs_list": [ 00:34:24.981 { 00:34:24.981 "name": "NewBaseBdev", 00:34:24.981 "uuid": "c169687c-2c06-4b9f-b61f-5b0621c3efa3", 00:34:24.981 "is_configured": true, 00:34:24.981 "data_offset": 0, 00:34:24.981 "data_size": 65536 00:34:24.981 }, 00:34:24.981 { 00:34:24.981 "name": "BaseBdev2", 00:34:24.981 "uuid": "3208e41a-f43f-45ec-bff2-05f10555b765", 00:34:24.981 "is_configured": true, 00:34:24.981 "data_offset": 0, 00:34:24.981 "data_size": 65536 00:34:24.981 }, 00:34:24.981 { 00:34:24.981 "name": "BaseBdev3", 00:34:24.981 "uuid": "dc8634f5-ae35-4a5f-8b9e-a9554dc9ca11", 00:34:24.981 "is_configured": true, 00:34:24.981 "data_offset": 0, 00:34:24.981 "data_size": 65536 00:34:24.981 } 00:34:24.981 ] 00:34:24.981 } 00:34:24.981 } 00:34:24.981 }' 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:34:24.981 BaseBdev2 00:34:24.981 BaseBdev3' 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:34:24.981 23:16:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:24.981 
23:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:24.981 [2024-12-09 23:16:05.601633] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:24.981 [2024-12-09 23:16:05.601675] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:24.981 [2024-12-09 23:16:05.601760] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:24.981 [2024-12-09 23:16:05.602053] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:24.981 [2024-12-09 23:16:05.602074] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 67279 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67279 ']' 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67279 00:34:24.981 23:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:34:25.241 23:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:25.241 23:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67279 00:34:25.241 killing process with pid 67279 00:34:25.241 23:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:25.241 23:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:25.241 23:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67279' 00:34:25.241 23:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67279 00:34:25.241 [2024-12-09 23:16:05.647359] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:25.241 23:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67279 00:34:25.501 [2024-12-09 23:16:05.953113] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:26.882 ************************************ 00:34:26.882 END TEST raid_state_function_test 00:34:26.882 ************************************ 00:34:26.882 23:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:34:26.882 00:34:26.882 real 0m10.792s 00:34:26.882 user 0m17.150s 00:34:26.882 sys 0m2.084s 00:34:26.882 23:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:26.882 23:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:34:26.882 23:16:07 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:34:26.882 23:16:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:34:26.882 23:16:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:26.882 23:16:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:26.882 ************************************ 00:34:26.882 START TEST raid_state_function_test_sb 00:34:26.882 ************************************ 00:34:26.882 23:16:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:34:26.882 23:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:34:26.882 23:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:34:26.882 23:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:34:26.882 23:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:34:26.882 23:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:34:26.882 23:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:26.882 23:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:34:26.882 23:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:34:26.882 23:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:26.882 23:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:34:26.882 23:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:34:26.882 23:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:26.882 
23:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:34:26.882 23:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:34:26.882 23:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:26.882 23:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:34:26.882 23:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:34:26.882 23:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:34:26.882 23:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:34:26.882 23:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:34:26.882 23:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:34:26.882 23:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:34:26.882 23:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:34:26.882 23:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:34:26.882 23:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:34:26.882 23:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=67900 00:34:26.882 23:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:34:26.882 23:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67900' 00:34:26.882 Process raid pid: 67900 00:34:26.882 23:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 
67900 00:34:26.882 23:16:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 67900 ']' 00:34:26.882 23:16:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:26.882 23:16:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:26.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:26.882 23:16:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:26.882 23:16:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:26.882 23:16:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:26.883 [2024-12-09 23:16:07.279069] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:34:26.883 [2024-12-09 23:16:07.279203] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:26.883 [2024-12-09 23:16:07.459635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:27.141 [2024-12-09 23:16:07.579654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:27.403 [2024-12-09 23:16:07.793135] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:27.403 [2024-12-09 23:16:07.793190] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:27.662 23:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:27.662 23:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:34:27.662 23:16:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:34:27.662 23:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.662 23:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:27.662 [2024-12-09 23:16:08.133693] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:27.662 [2024-12-09 23:16:08.133757] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:27.662 [2024-12-09 23:16:08.133775] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:27.662 [2024-12-09 23:16:08.133789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:27.662 [2024-12-09 23:16:08.133797] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:27.662 [2024-12-09 23:16:08.133809] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:27.662 23:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.662 23:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:34:27.662 23:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:27.662 23:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:27.662 23:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:27.662 23:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:27.662 23:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:34:27.662 23:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:27.662 23:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:27.662 23:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:27.663 23:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:27.663 23:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:27.663 23:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.663 23:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:27.663 23:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:27.663 23:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.663 23:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:27.663 "name": "Existed_Raid", 00:34:27.663 "uuid": "7d596cc4-23ef-4dd7-b99a-1dc0560c3bc1", 00:34:27.663 "strip_size_kb": 0, 00:34:27.663 "state": "configuring", 00:34:27.663 "raid_level": "raid1", 00:34:27.663 "superblock": true, 00:34:27.663 "num_base_bdevs": 3, 00:34:27.663 "num_base_bdevs_discovered": 0, 00:34:27.663 "num_base_bdevs_operational": 3, 00:34:27.663 "base_bdevs_list": [ 00:34:27.663 { 00:34:27.663 "name": "BaseBdev1", 00:34:27.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:27.663 "is_configured": false, 00:34:27.663 "data_offset": 0, 00:34:27.663 "data_size": 0 00:34:27.663 }, 00:34:27.663 { 00:34:27.663 "name": "BaseBdev2", 00:34:27.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:27.663 "is_configured": false, 00:34:27.663 "data_offset": 0, 00:34:27.663 "data_size": 0 
00:34:27.663 }, 00:34:27.663 { 00:34:27.663 "name": "BaseBdev3", 00:34:27.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:27.663 "is_configured": false, 00:34:27.663 "data_offset": 0, 00:34:27.663 "data_size": 0 00:34:27.663 } 00:34:27.663 ] 00:34:27.663 }' 00:34:27.663 23:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:27.663 23:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:28.229 [2024-12-09 23:16:08.581034] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:28.229 [2024-12-09 23:16:08.581077] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:28.229 [2024-12-09 23:16:08.593032] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:28.229 [2024-12-09 23:16:08.593086] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:28.229 [2024-12-09 23:16:08.593098] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:34:28.229 [2024-12-09 23:16:08.593114] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:28.229 [2024-12-09 23:16:08.593124] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:28.229 [2024-12-09 23:16:08.593139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:28.229 [2024-12-09 23:16:08.642341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:28.229 BaseBdev1 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:28.229 23:16:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:28.229 [ 00:34:28.229 { 00:34:28.229 "name": "BaseBdev1", 00:34:28.229 "aliases": [ 00:34:28.229 "382a4c83-a099-412b-9fd9-8d881c0327aa" 00:34:28.229 ], 00:34:28.229 "product_name": "Malloc disk", 00:34:28.229 "block_size": 512, 00:34:28.229 "num_blocks": 65536, 00:34:28.229 "uuid": "382a4c83-a099-412b-9fd9-8d881c0327aa", 00:34:28.229 "assigned_rate_limits": { 00:34:28.229 "rw_ios_per_sec": 0, 00:34:28.229 "rw_mbytes_per_sec": 0, 00:34:28.229 "r_mbytes_per_sec": 0, 00:34:28.229 "w_mbytes_per_sec": 0 00:34:28.229 }, 00:34:28.229 "claimed": true, 00:34:28.229 "claim_type": "exclusive_write", 00:34:28.229 "zoned": false, 00:34:28.229 "supported_io_types": { 00:34:28.229 "read": true, 00:34:28.229 "write": true, 00:34:28.229 "unmap": true, 00:34:28.229 "flush": true, 00:34:28.229 "reset": true, 00:34:28.229 "nvme_admin": false, 00:34:28.229 "nvme_io": false, 00:34:28.229 "nvme_io_md": false, 00:34:28.229 "write_zeroes": true, 00:34:28.229 "zcopy": true, 00:34:28.229 "get_zone_info": false, 00:34:28.229 "zone_management": false, 00:34:28.229 "zone_append": false, 00:34:28.229 "compare": false, 00:34:28.229 "compare_and_write": false, 00:34:28.229 "abort": true, 00:34:28.229 "seek_hole": false, 00:34:28.229 "seek_data": false, 00:34:28.229 "copy": true, 00:34:28.229 "nvme_iov_md": false 00:34:28.229 }, 
00:34:28.229 "memory_domains": [ 00:34:28.229 { 00:34:28.229 "dma_device_id": "system", 00:34:28.229 "dma_device_type": 1 00:34:28.229 }, 00:34:28.229 { 00:34:28.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:28.229 "dma_device_type": 2 00:34:28.229 } 00:34:28.229 ], 00:34:28.229 "driver_specific": {} 00:34:28.229 } 00:34:28.229 ] 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:28.229 "name": "Existed_Raid", 00:34:28.229 "uuid": "dd438c96-56e8-4f9e-8d7e-c65a78cf32df", 00:34:28.229 "strip_size_kb": 0, 00:34:28.229 "state": "configuring", 00:34:28.229 "raid_level": "raid1", 00:34:28.229 "superblock": true, 00:34:28.229 "num_base_bdevs": 3, 00:34:28.229 "num_base_bdevs_discovered": 1, 00:34:28.229 "num_base_bdevs_operational": 3, 00:34:28.229 "base_bdevs_list": [ 00:34:28.229 { 00:34:28.229 "name": "BaseBdev1", 00:34:28.229 "uuid": "382a4c83-a099-412b-9fd9-8d881c0327aa", 00:34:28.229 "is_configured": true, 00:34:28.229 "data_offset": 2048, 00:34:28.229 "data_size": 63488 00:34:28.229 }, 00:34:28.229 { 00:34:28.229 "name": "BaseBdev2", 00:34:28.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:28.229 "is_configured": false, 00:34:28.229 "data_offset": 0, 00:34:28.229 "data_size": 0 00:34:28.229 }, 00:34:28.229 { 00:34:28.229 "name": "BaseBdev3", 00:34:28.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:28.229 "is_configured": false, 00:34:28.229 "data_offset": 0, 00:34:28.229 "data_size": 0 00:34:28.229 } 00:34:28.229 ] 00:34:28.229 }' 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:28.229 23:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:28.488 23:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:34:28.488 23:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.488 
23:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:28.488 [2024-12-09 23:16:09.114368] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:28.488 [2024-12-09 23:16:09.114439] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:34:28.488 23:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.488 23:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:34:28.488 23:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.488 23:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:28.488 [2024-12-09 23:16:09.122450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:28.747 [2024-12-09 23:16:09.124749] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:28.747 [2024-12-09 23:16:09.124799] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:28.747 [2024-12-09 23:16:09.124811] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:28.747 [2024-12-09 23:16:09.124825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:28.747 23:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.747 23:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:34:28.747 23:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:34:28.747 23:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring 
raid1 0 3 00:34:28.747 23:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:28.747 23:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:28.747 23:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:28.747 23:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:28.747 23:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:28.747 23:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:28.747 23:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:28.747 23:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:28.747 23:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:28.747 23:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:28.747 23:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.747 23:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:28.747 23:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:28.747 23:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.747 23:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:28.747 "name": "Existed_Raid", 00:34:28.747 "uuid": "0ee69278-d696-4869-9427-309d47e23e6d", 00:34:28.747 "strip_size_kb": 0, 00:34:28.747 "state": "configuring", 00:34:28.747 "raid_level": "raid1", 00:34:28.747 "superblock": true, 00:34:28.747 
"num_base_bdevs": 3, 00:34:28.747 "num_base_bdevs_discovered": 1, 00:34:28.747 "num_base_bdevs_operational": 3, 00:34:28.747 "base_bdevs_list": [ 00:34:28.747 { 00:34:28.747 "name": "BaseBdev1", 00:34:28.747 "uuid": "382a4c83-a099-412b-9fd9-8d881c0327aa", 00:34:28.747 "is_configured": true, 00:34:28.747 "data_offset": 2048, 00:34:28.747 "data_size": 63488 00:34:28.747 }, 00:34:28.747 { 00:34:28.747 "name": "BaseBdev2", 00:34:28.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:28.747 "is_configured": false, 00:34:28.747 "data_offset": 0, 00:34:28.747 "data_size": 0 00:34:28.747 }, 00:34:28.747 { 00:34:28.747 "name": "BaseBdev3", 00:34:28.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:28.747 "is_configured": false, 00:34:28.747 "data_offset": 0, 00:34:28.747 "data_size": 0 00:34:28.747 } 00:34:28.747 ] 00:34:28.747 }' 00:34:28.747 23:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:28.747 23:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:29.007 23:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:34:29.007 23:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.007 23:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:29.007 [2024-12-09 23:16:09.601489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:29.007 BaseBdev2 00:34:29.007 23:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.007 23:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:34:29.007 23:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:34:29.007 23:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 
-- # local bdev_timeout= 00:34:29.007 23:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:34:29.007 23:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:29.007 23:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:29.007 23:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:29.007 23:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.007 23:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:29.007 23:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.007 23:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:29.007 23:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.007 23:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:29.007 [ 00:34:29.007 { 00:34:29.007 "name": "BaseBdev2", 00:34:29.007 "aliases": [ 00:34:29.007 "beb21e20-d33b-47a9-af60-033f4ff9c521" 00:34:29.007 ], 00:34:29.007 "product_name": "Malloc disk", 00:34:29.007 "block_size": 512, 00:34:29.007 "num_blocks": 65536, 00:34:29.007 "uuid": "beb21e20-d33b-47a9-af60-033f4ff9c521", 00:34:29.007 "assigned_rate_limits": { 00:34:29.007 "rw_ios_per_sec": 0, 00:34:29.007 "rw_mbytes_per_sec": 0, 00:34:29.007 "r_mbytes_per_sec": 0, 00:34:29.007 "w_mbytes_per_sec": 0 00:34:29.007 }, 00:34:29.007 "claimed": true, 00:34:29.007 "claim_type": "exclusive_write", 00:34:29.007 "zoned": false, 00:34:29.007 "supported_io_types": { 00:34:29.007 "read": true, 00:34:29.007 "write": true, 00:34:29.007 "unmap": true, 00:34:29.007 "flush": true, 00:34:29.007 "reset": true, 00:34:29.007 
"nvme_admin": false, 00:34:29.007 "nvme_io": false, 00:34:29.007 "nvme_io_md": false, 00:34:29.007 "write_zeroes": true, 00:34:29.007 "zcopy": true, 00:34:29.007 "get_zone_info": false, 00:34:29.007 "zone_management": false, 00:34:29.007 "zone_append": false, 00:34:29.007 "compare": false, 00:34:29.007 "compare_and_write": false, 00:34:29.007 "abort": true, 00:34:29.007 "seek_hole": false, 00:34:29.007 "seek_data": false, 00:34:29.007 "copy": true, 00:34:29.007 "nvme_iov_md": false 00:34:29.007 }, 00:34:29.007 "memory_domains": [ 00:34:29.007 { 00:34:29.007 "dma_device_id": "system", 00:34:29.007 "dma_device_type": 1 00:34:29.007 }, 00:34:29.007 { 00:34:29.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:29.007 "dma_device_type": 2 00:34:29.007 } 00:34:29.007 ], 00:34:29.007 "driver_specific": {} 00:34:29.007 } 00:34:29.007 ] 00:34:29.007 23:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.007 23:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:34:29.007 23:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:34:29.007 23:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:34:29.007 23:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:34:29.007 23:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:29.007 23:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:29.007 23:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:29.007 23:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:29.007 23:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:34:29.007 23:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:29.007 23:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:29.007 23:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:29.007 23:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:29.007 23:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:29.007 23:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.007 23:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:29.007 23:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:29.266 23:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.266 23:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:29.266 "name": "Existed_Raid", 00:34:29.266 "uuid": "0ee69278-d696-4869-9427-309d47e23e6d", 00:34:29.266 "strip_size_kb": 0, 00:34:29.266 "state": "configuring", 00:34:29.266 "raid_level": "raid1", 00:34:29.266 "superblock": true, 00:34:29.266 "num_base_bdevs": 3, 00:34:29.266 "num_base_bdevs_discovered": 2, 00:34:29.266 "num_base_bdevs_operational": 3, 00:34:29.266 "base_bdevs_list": [ 00:34:29.266 { 00:34:29.266 "name": "BaseBdev1", 00:34:29.266 "uuid": "382a4c83-a099-412b-9fd9-8d881c0327aa", 00:34:29.266 "is_configured": true, 00:34:29.266 "data_offset": 2048, 00:34:29.266 "data_size": 63488 00:34:29.266 }, 00:34:29.266 { 00:34:29.266 "name": "BaseBdev2", 00:34:29.266 "uuid": "beb21e20-d33b-47a9-af60-033f4ff9c521", 00:34:29.266 "is_configured": true, 00:34:29.266 "data_offset": 2048, 00:34:29.266 "data_size": 
63488 00:34:29.266 }, 00:34:29.266 { 00:34:29.266 "name": "BaseBdev3", 00:34:29.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:29.266 "is_configured": false, 00:34:29.266 "data_offset": 0, 00:34:29.266 "data_size": 0 00:34:29.266 } 00:34:29.266 ] 00:34:29.266 }' 00:34:29.266 23:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:29.266 23:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:29.526 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:34:29.526 23:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.526 23:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:29.526 [2024-12-09 23:16:10.103358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:29.526 [2024-12-09 23:16:10.103698] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:34:29.526 [2024-12-09 23:16:10.103724] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:34:29.526 [2024-12-09 23:16:10.104054] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:34:29.526 BaseBdev3 00:34:29.526 [2024-12-09 23:16:10.104230] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:34:29.526 [2024-12-09 23:16:10.104253] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:34:29.526 [2024-12-09 23:16:10.104424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:29.526 23:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.526 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:34:29.526 
23:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:34:29.526 23:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:29.526 23:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:34:29.526 23:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:29.526 23:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:29.526 23:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:29.526 23:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.526 23:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:29.526 23:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.526 23:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:34:29.526 23:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.526 23:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:29.526 [ 00:34:29.526 { 00:34:29.526 "name": "BaseBdev3", 00:34:29.526 "aliases": [ 00:34:29.526 "90c13067-cafe-47f3-9095-7ab0bbc734d8" 00:34:29.526 ], 00:34:29.526 "product_name": "Malloc disk", 00:34:29.526 "block_size": 512, 00:34:29.526 "num_blocks": 65536, 00:34:29.526 "uuid": "90c13067-cafe-47f3-9095-7ab0bbc734d8", 00:34:29.526 "assigned_rate_limits": { 00:34:29.526 "rw_ios_per_sec": 0, 00:34:29.526 "rw_mbytes_per_sec": 0, 00:34:29.526 "r_mbytes_per_sec": 0, 00:34:29.526 "w_mbytes_per_sec": 0 00:34:29.526 }, 00:34:29.526 "claimed": true, 00:34:29.526 "claim_type": "exclusive_write", 00:34:29.526 "zoned": 
false, 00:34:29.526 "supported_io_types": { 00:34:29.526 "read": true, 00:34:29.526 "write": true, 00:34:29.526 "unmap": true, 00:34:29.526 "flush": true, 00:34:29.526 "reset": true, 00:34:29.526 "nvme_admin": false, 00:34:29.526 "nvme_io": false, 00:34:29.526 "nvme_io_md": false, 00:34:29.526 "write_zeroes": true, 00:34:29.526 "zcopy": true, 00:34:29.526 "get_zone_info": false, 00:34:29.526 "zone_management": false, 00:34:29.526 "zone_append": false, 00:34:29.526 "compare": false, 00:34:29.526 "compare_and_write": false, 00:34:29.526 "abort": true, 00:34:29.526 "seek_hole": false, 00:34:29.526 "seek_data": false, 00:34:29.526 "copy": true, 00:34:29.526 "nvme_iov_md": false 00:34:29.526 }, 00:34:29.526 "memory_domains": [ 00:34:29.526 { 00:34:29.526 "dma_device_id": "system", 00:34:29.526 "dma_device_type": 1 00:34:29.526 }, 00:34:29.526 { 00:34:29.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:29.526 "dma_device_type": 2 00:34:29.526 } 00:34:29.526 ], 00:34:29.526 "driver_specific": {} 00:34:29.526 } 00:34:29.526 ] 00:34:29.526 23:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.526 23:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:34:29.526 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:34:29.526 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:34:29.526 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:34:29.526 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:29.526 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:29.526 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:29.526 23:16:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:29.526 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:29.526 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:29.526 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:29.526 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:29.526 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:29.526 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:29.526 23:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.526 23:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:29.526 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:29.526 23:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.784 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:29.784 "name": "Existed_Raid", 00:34:29.784 "uuid": "0ee69278-d696-4869-9427-309d47e23e6d", 00:34:29.784 "strip_size_kb": 0, 00:34:29.784 "state": "online", 00:34:29.784 "raid_level": "raid1", 00:34:29.784 "superblock": true, 00:34:29.784 "num_base_bdevs": 3, 00:34:29.784 "num_base_bdevs_discovered": 3, 00:34:29.784 "num_base_bdevs_operational": 3, 00:34:29.784 "base_bdevs_list": [ 00:34:29.784 { 00:34:29.784 "name": "BaseBdev1", 00:34:29.784 "uuid": "382a4c83-a099-412b-9fd9-8d881c0327aa", 00:34:29.784 "is_configured": true, 00:34:29.784 "data_offset": 2048, 00:34:29.784 "data_size": 63488 00:34:29.784 }, 00:34:29.784 { 00:34:29.784 
"name": "BaseBdev2", 00:34:29.784 "uuid": "beb21e20-d33b-47a9-af60-033f4ff9c521", 00:34:29.784 "is_configured": true, 00:34:29.784 "data_offset": 2048, 00:34:29.784 "data_size": 63488 00:34:29.784 }, 00:34:29.784 { 00:34:29.784 "name": "BaseBdev3", 00:34:29.784 "uuid": "90c13067-cafe-47f3-9095-7ab0bbc734d8", 00:34:29.784 "is_configured": true, 00:34:29.784 "data_offset": 2048, 00:34:29.784 "data_size": 63488 00:34:29.784 } 00:34:29.784 ] 00:34:29.784 }' 00:34:29.784 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:29.784 23:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:30.042 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:34:30.042 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:34:30.042 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:34:30.042 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:34:30.042 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:34:30.042 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:34:30.042 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:34:30.042 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:34:30.042 23:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.042 23:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:30.042 [2024-12-09 23:16:10.539125] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:30.042 23:16:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.042 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:30.042 "name": "Existed_Raid", 00:34:30.042 "aliases": [ 00:34:30.042 "0ee69278-d696-4869-9427-309d47e23e6d" 00:34:30.042 ], 00:34:30.043 "product_name": "Raid Volume", 00:34:30.043 "block_size": 512, 00:34:30.043 "num_blocks": 63488, 00:34:30.043 "uuid": "0ee69278-d696-4869-9427-309d47e23e6d", 00:34:30.043 "assigned_rate_limits": { 00:34:30.043 "rw_ios_per_sec": 0, 00:34:30.043 "rw_mbytes_per_sec": 0, 00:34:30.043 "r_mbytes_per_sec": 0, 00:34:30.043 "w_mbytes_per_sec": 0 00:34:30.043 }, 00:34:30.043 "claimed": false, 00:34:30.043 "zoned": false, 00:34:30.043 "supported_io_types": { 00:34:30.043 "read": true, 00:34:30.043 "write": true, 00:34:30.043 "unmap": false, 00:34:30.043 "flush": false, 00:34:30.043 "reset": true, 00:34:30.043 "nvme_admin": false, 00:34:30.043 "nvme_io": false, 00:34:30.043 "nvme_io_md": false, 00:34:30.043 "write_zeroes": true, 00:34:30.043 "zcopy": false, 00:34:30.043 "get_zone_info": false, 00:34:30.043 "zone_management": false, 00:34:30.043 "zone_append": false, 00:34:30.043 "compare": false, 00:34:30.043 "compare_and_write": false, 00:34:30.043 "abort": false, 00:34:30.043 "seek_hole": false, 00:34:30.043 "seek_data": false, 00:34:30.043 "copy": false, 00:34:30.043 "nvme_iov_md": false 00:34:30.043 }, 00:34:30.043 "memory_domains": [ 00:34:30.043 { 00:34:30.043 "dma_device_id": "system", 00:34:30.043 "dma_device_type": 1 00:34:30.043 }, 00:34:30.043 { 00:34:30.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:30.043 "dma_device_type": 2 00:34:30.043 }, 00:34:30.043 { 00:34:30.043 "dma_device_id": "system", 00:34:30.043 "dma_device_type": 1 00:34:30.043 }, 00:34:30.043 { 00:34:30.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:30.043 "dma_device_type": 2 00:34:30.043 }, 00:34:30.043 { 00:34:30.043 "dma_device_id": "system", 00:34:30.043 "dma_device_type": 1 00:34:30.043 }, 
00:34:30.043 { 00:34:30.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:30.043 "dma_device_type": 2 00:34:30.043 } 00:34:30.043 ], 00:34:30.043 "driver_specific": { 00:34:30.043 "raid": { 00:34:30.043 "uuid": "0ee69278-d696-4869-9427-309d47e23e6d", 00:34:30.043 "strip_size_kb": 0, 00:34:30.043 "state": "online", 00:34:30.043 "raid_level": "raid1", 00:34:30.043 "superblock": true, 00:34:30.043 "num_base_bdevs": 3, 00:34:30.043 "num_base_bdevs_discovered": 3, 00:34:30.043 "num_base_bdevs_operational": 3, 00:34:30.043 "base_bdevs_list": [ 00:34:30.043 { 00:34:30.043 "name": "BaseBdev1", 00:34:30.043 "uuid": "382a4c83-a099-412b-9fd9-8d881c0327aa", 00:34:30.043 "is_configured": true, 00:34:30.043 "data_offset": 2048, 00:34:30.043 "data_size": 63488 00:34:30.043 }, 00:34:30.043 { 00:34:30.043 "name": "BaseBdev2", 00:34:30.043 "uuid": "beb21e20-d33b-47a9-af60-033f4ff9c521", 00:34:30.043 "is_configured": true, 00:34:30.043 "data_offset": 2048, 00:34:30.043 "data_size": 63488 00:34:30.043 }, 00:34:30.043 { 00:34:30.043 "name": "BaseBdev3", 00:34:30.043 "uuid": "90c13067-cafe-47f3-9095-7ab0bbc734d8", 00:34:30.043 "is_configured": true, 00:34:30.043 "data_offset": 2048, 00:34:30.043 "data_size": 63488 00:34:30.043 } 00:34:30.043 ] 00:34:30.043 } 00:34:30.043 } 00:34:30.043 }' 00:34:30.043 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:30.043 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:34:30.043 BaseBdev2 00:34:30.043 BaseBdev3' 00:34:30.043 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:30.043 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:34:30.043 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for 
name in $base_bdev_names 00:34:30.043 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:30.043 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:34:30.043 23:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.043 23:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:30.043 23:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.301 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:30.301 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:30.301 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:30.301 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:34:30.301 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:30.301 23:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.301 23:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:30.301 23:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.301 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:30.301 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:30.301 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:30.301 23:16:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:34:30.301 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:30.301 23:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.301 23:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:30.301 23:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.301 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:30.302 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:30.302 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:34:30.302 23:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.302 23:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:30.302 [2024-12-09 23:16:10.766577] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:30.302 23:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.302 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:34:30.302 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:34:30.302 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:34:30.302 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:34:30.302 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:34:30.302 23:16:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:34:30.302 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:30.302 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:30.302 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:30.302 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:30.302 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:30.302 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:30.302 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:30.302 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:30.302 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:30.302 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:30.302 23:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.302 23:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:30.302 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:30.302 23:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.302 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:30.302 "name": "Existed_Raid", 00:34:30.302 "uuid": "0ee69278-d696-4869-9427-309d47e23e6d", 00:34:30.302 "strip_size_kb": 0, 00:34:30.302 "state": "online", 00:34:30.302 "raid_level": 
"raid1", 00:34:30.302 "superblock": true, 00:34:30.302 "num_base_bdevs": 3, 00:34:30.302 "num_base_bdevs_discovered": 2, 00:34:30.302 "num_base_bdevs_operational": 2, 00:34:30.302 "base_bdevs_list": [ 00:34:30.302 { 00:34:30.302 "name": null, 00:34:30.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:30.302 "is_configured": false, 00:34:30.302 "data_offset": 0, 00:34:30.302 "data_size": 63488 00:34:30.302 }, 00:34:30.302 { 00:34:30.302 "name": "BaseBdev2", 00:34:30.302 "uuid": "beb21e20-d33b-47a9-af60-033f4ff9c521", 00:34:30.302 "is_configured": true, 00:34:30.302 "data_offset": 2048, 00:34:30.302 "data_size": 63488 00:34:30.302 }, 00:34:30.302 { 00:34:30.302 "name": "BaseBdev3", 00:34:30.302 "uuid": "90c13067-cafe-47f3-9095-7ab0bbc734d8", 00:34:30.302 "is_configured": true, 00:34:30.302 "data_offset": 2048, 00:34:30.302 "data_size": 63488 00:34:30.302 } 00:34:30.302 ] 00:34:30.302 }' 00:34:30.302 23:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:30.302 23:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:30.869 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:34:30.869 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:34:30.869 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:34:30.869 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:30.869 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.869 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:30.869 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.869 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:34:30.869 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:30.869 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:34:30.869 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.869 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:30.869 [2024-12-09 23:16:11.340181] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:30.869 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.869 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:34:30.869 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:34:30.869 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:30.869 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:34:30.869 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.869 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:30.870 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.870 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:34:30.870 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:30.870 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:34:30.870 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.870 23:16:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:30.870 [2024-12-09 23:16:11.486225] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:34:30.870 [2024-12-09 23:16:11.486354] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:31.129 [2024-12-09 23:16:11.587316] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:31.129 [2024-12-09 23:16:11.587620] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:31.129 [2024-12-09 23:16:11.587744] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:34:31.129 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.129 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:34:31.129 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:34:31.129 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:31.129 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.129 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:31.129 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:34:31.129 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.129 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:34:31.129 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:34:31.129 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:34:31.129 23:16:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:34:31.129 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:34:31.129 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:34:31.129 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.129 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:31.129 BaseBdev2 00:34:31.129 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.129 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:34:31.129 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:34:31.130 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:31.130 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:34:31.130 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:31.130 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:31.130 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:31.130 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.130 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:31.130 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.130 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:31.130 23:16:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.130 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:31.130 [ 00:34:31.130 { 00:34:31.130 "name": "BaseBdev2", 00:34:31.130 "aliases": [ 00:34:31.130 "9d9cae3e-b477-407f-b881-5d3eb9ca06e5" 00:34:31.130 ], 00:34:31.130 "product_name": "Malloc disk", 00:34:31.130 "block_size": 512, 00:34:31.130 "num_blocks": 65536, 00:34:31.130 "uuid": "9d9cae3e-b477-407f-b881-5d3eb9ca06e5", 00:34:31.130 "assigned_rate_limits": { 00:34:31.130 "rw_ios_per_sec": 0, 00:34:31.130 "rw_mbytes_per_sec": 0, 00:34:31.130 "r_mbytes_per_sec": 0, 00:34:31.130 "w_mbytes_per_sec": 0 00:34:31.130 }, 00:34:31.130 "claimed": false, 00:34:31.130 "zoned": false, 00:34:31.130 "supported_io_types": { 00:34:31.130 "read": true, 00:34:31.130 "write": true, 00:34:31.130 "unmap": true, 00:34:31.130 "flush": true, 00:34:31.130 "reset": true, 00:34:31.130 "nvme_admin": false, 00:34:31.130 "nvme_io": false, 00:34:31.130 "nvme_io_md": false, 00:34:31.130 "write_zeroes": true, 00:34:31.130 "zcopy": true, 00:34:31.130 "get_zone_info": false, 00:34:31.130 "zone_management": false, 00:34:31.130 "zone_append": false, 00:34:31.130 "compare": false, 00:34:31.130 "compare_and_write": false, 00:34:31.130 "abort": true, 00:34:31.130 "seek_hole": false, 00:34:31.130 "seek_data": false, 00:34:31.130 "copy": true, 00:34:31.130 "nvme_iov_md": false 00:34:31.130 }, 00:34:31.130 "memory_domains": [ 00:34:31.130 { 00:34:31.130 "dma_device_id": "system", 00:34:31.130 "dma_device_type": 1 00:34:31.130 }, 00:34:31.130 { 00:34:31.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:31.130 "dma_device_type": 2 00:34:31.130 } 00:34:31.130 ], 00:34:31.130 "driver_specific": {} 00:34:31.130 } 00:34:31.130 ] 00:34:31.130 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.130 23:16:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:34:31.130 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:34:31.130 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:34:31.130 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:34:31.130 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.130 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:31.130 BaseBdev3 00:34:31.130 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.130 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:34:31.130 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:34:31.130 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:31.130 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:34:31.130 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:31.130 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:31.130 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:31.130 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.130 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:31.130 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.130 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 -t 2000 00:34:31.130 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.130 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:31.388 [ 00:34:31.388 { 00:34:31.388 "name": "BaseBdev3", 00:34:31.388 "aliases": [ 00:34:31.388 "b3d7c5a3-a9e2-4a0f-8b76-449052801022" 00:34:31.388 ], 00:34:31.388 "product_name": "Malloc disk", 00:34:31.388 "block_size": 512, 00:34:31.388 "num_blocks": 65536, 00:34:31.388 "uuid": "b3d7c5a3-a9e2-4a0f-8b76-449052801022", 00:34:31.388 "assigned_rate_limits": { 00:34:31.388 "rw_ios_per_sec": 0, 00:34:31.388 "rw_mbytes_per_sec": 0, 00:34:31.388 "r_mbytes_per_sec": 0, 00:34:31.388 "w_mbytes_per_sec": 0 00:34:31.388 }, 00:34:31.388 "claimed": false, 00:34:31.388 "zoned": false, 00:34:31.388 "supported_io_types": { 00:34:31.388 "read": true, 00:34:31.388 "write": true, 00:34:31.388 "unmap": true, 00:34:31.388 "flush": true, 00:34:31.388 "reset": true, 00:34:31.388 "nvme_admin": false, 00:34:31.388 "nvme_io": false, 00:34:31.388 "nvme_io_md": false, 00:34:31.388 "write_zeroes": true, 00:34:31.388 "zcopy": true, 00:34:31.388 "get_zone_info": false, 00:34:31.388 "zone_management": false, 00:34:31.388 "zone_append": false, 00:34:31.388 "compare": false, 00:34:31.388 "compare_and_write": false, 00:34:31.388 "abort": true, 00:34:31.388 "seek_hole": false, 00:34:31.388 "seek_data": false, 00:34:31.388 "copy": true, 00:34:31.389 "nvme_iov_md": false 00:34:31.389 }, 00:34:31.389 "memory_domains": [ 00:34:31.389 { 00:34:31.389 "dma_device_id": "system", 00:34:31.389 "dma_device_type": 1 00:34:31.389 }, 00:34:31.389 { 00:34:31.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:31.389 "dma_device_type": 2 00:34:31.389 } 00:34:31.389 ], 00:34:31.389 "driver_specific": {} 00:34:31.389 } 00:34:31.389 ] 00:34:31.389 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.389 
23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:34:31.389 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:34:31.389 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:34:31.389 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:34:31.389 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.389 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:31.389 [2024-12-09 23:16:11.783118] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:31.389 [2024-12-09 23:16:11.783185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:31.389 [2024-12-09 23:16:11.783233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:31.389 [2024-12-09 23:16:11.785701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:31.389 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.389 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:34:31.389 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:31.389 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:31.389 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:31.389 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:31.389 23:16:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:31.389 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:31.389 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:31.389 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:31.389 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:31.389 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:31.389 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:31.389 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.389 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:31.389 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.389 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:31.389 "name": "Existed_Raid", 00:34:31.389 "uuid": "ec4d582d-c1dc-42f2-8291-4a9ec32e7256", 00:34:31.389 "strip_size_kb": 0, 00:34:31.389 "state": "configuring", 00:34:31.389 "raid_level": "raid1", 00:34:31.389 "superblock": true, 00:34:31.389 "num_base_bdevs": 3, 00:34:31.389 "num_base_bdevs_discovered": 2, 00:34:31.389 "num_base_bdevs_operational": 3, 00:34:31.389 "base_bdevs_list": [ 00:34:31.389 { 00:34:31.389 "name": "BaseBdev1", 00:34:31.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:31.389 "is_configured": false, 00:34:31.389 "data_offset": 0, 00:34:31.389 "data_size": 0 00:34:31.389 }, 00:34:31.389 { 00:34:31.389 "name": "BaseBdev2", 00:34:31.389 "uuid": "9d9cae3e-b477-407f-b881-5d3eb9ca06e5", 00:34:31.389 "is_configured": 
true, 00:34:31.389 "data_offset": 2048, 00:34:31.389 "data_size": 63488 00:34:31.389 }, 00:34:31.389 { 00:34:31.389 "name": "BaseBdev3", 00:34:31.389 "uuid": "b3d7c5a3-a9e2-4a0f-8b76-449052801022", 00:34:31.389 "is_configured": true, 00:34:31.389 "data_offset": 2048, 00:34:31.389 "data_size": 63488 00:34:31.389 } 00:34:31.389 ] 00:34:31.389 }' 00:34:31.389 23:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:31.389 23:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:31.648 23:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:34:31.648 23:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.648 23:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:31.648 [2024-12-09 23:16:12.170572] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:31.648 23:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.648 23:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:34:31.648 23:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:31.648 23:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:31.648 23:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:31.648 23:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:31.648 23:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:31.648 23:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:31.648 23:16:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:31.648 23:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:31.648 23:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:31.648 23:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:31.649 23:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:31.649 23:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.649 23:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:31.649 23:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.649 23:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:31.649 "name": "Existed_Raid", 00:34:31.649 "uuid": "ec4d582d-c1dc-42f2-8291-4a9ec32e7256", 00:34:31.649 "strip_size_kb": 0, 00:34:31.649 "state": "configuring", 00:34:31.649 "raid_level": "raid1", 00:34:31.649 "superblock": true, 00:34:31.649 "num_base_bdevs": 3, 00:34:31.649 "num_base_bdevs_discovered": 1, 00:34:31.649 "num_base_bdevs_operational": 3, 00:34:31.649 "base_bdevs_list": [ 00:34:31.649 { 00:34:31.649 "name": "BaseBdev1", 00:34:31.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:31.649 "is_configured": false, 00:34:31.649 "data_offset": 0, 00:34:31.649 "data_size": 0 00:34:31.649 }, 00:34:31.649 { 00:34:31.649 "name": null, 00:34:31.649 "uuid": "9d9cae3e-b477-407f-b881-5d3eb9ca06e5", 00:34:31.649 "is_configured": false, 00:34:31.649 "data_offset": 0, 00:34:31.649 "data_size": 63488 00:34:31.649 }, 00:34:31.649 { 00:34:31.649 "name": "BaseBdev3", 00:34:31.649 "uuid": "b3d7c5a3-a9e2-4a0f-8b76-449052801022", 00:34:31.649 "is_configured": true, 
00:34:31.649 "data_offset": 2048, 00:34:31.649 "data_size": 63488 00:34:31.649 } 00:34:31.649 ] 00:34:31.649 }' 00:34:31.649 23:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:31.649 23:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:32.225 [2024-12-09 23:16:12.645476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:32.225 BaseBdev1 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:32.225 
23:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:32.225 [ 00:34:32.225 { 00:34:32.225 "name": "BaseBdev1", 00:34:32.225 "aliases": [ 00:34:32.225 "a247d804-8433-4416-839b-53e7acc214a4" 00:34:32.225 ], 00:34:32.225 "product_name": "Malloc disk", 00:34:32.225 "block_size": 512, 00:34:32.225 "num_blocks": 65536, 00:34:32.225 "uuid": "a247d804-8433-4416-839b-53e7acc214a4", 00:34:32.225 "assigned_rate_limits": { 00:34:32.225 "rw_ios_per_sec": 0, 00:34:32.225 "rw_mbytes_per_sec": 0, 00:34:32.225 "r_mbytes_per_sec": 0, 00:34:32.225 "w_mbytes_per_sec": 0 00:34:32.225 }, 00:34:32.225 "claimed": true, 00:34:32.225 "claim_type": "exclusive_write", 00:34:32.225 "zoned": false, 00:34:32.225 "supported_io_types": { 00:34:32.225 "read": true, 00:34:32.225 "write": true, 00:34:32.225 "unmap": true, 00:34:32.225 "flush": true, 00:34:32.225 "reset": true, 00:34:32.225 "nvme_admin": false, 00:34:32.225 "nvme_io": 
false, 00:34:32.225 "nvme_io_md": false, 00:34:32.225 "write_zeroes": true, 00:34:32.225 "zcopy": true, 00:34:32.225 "get_zone_info": false, 00:34:32.225 "zone_management": false, 00:34:32.225 "zone_append": false, 00:34:32.225 "compare": false, 00:34:32.225 "compare_and_write": false, 00:34:32.225 "abort": true, 00:34:32.225 "seek_hole": false, 00:34:32.225 "seek_data": false, 00:34:32.225 "copy": true, 00:34:32.225 "nvme_iov_md": false 00:34:32.225 }, 00:34:32.225 "memory_domains": [ 00:34:32.225 { 00:34:32.225 "dma_device_id": "system", 00:34:32.225 "dma_device_type": 1 00:34:32.225 }, 00:34:32.225 { 00:34:32.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:32.225 "dma_device_type": 2 00:34:32.225 } 00:34:32.225 ], 00:34:32.225 "driver_specific": {} 00:34:32.225 } 00:34:32.225 ] 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:32.225 23:16:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:32.225 "name": "Existed_Raid", 00:34:32.225 "uuid": "ec4d582d-c1dc-42f2-8291-4a9ec32e7256", 00:34:32.225 "strip_size_kb": 0, 00:34:32.225 "state": "configuring", 00:34:32.225 "raid_level": "raid1", 00:34:32.225 "superblock": true, 00:34:32.225 "num_base_bdevs": 3, 00:34:32.225 "num_base_bdevs_discovered": 2, 00:34:32.225 "num_base_bdevs_operational": 3, 00:34:32.225 "base_bdevs_list": [ 00:34:32.225 { 00:34:32.225 "name": "BaseBdev1", 00:34:32.225 "uuid": "a247d804-8433-4416-839b-53e7acc214a4", 00:34:32.225 "is_configured": true, 00:34:32.225 "data_offset": 2048, 00:34:32.225 "data_size": 63488 00:34:32.225 }, 00:34:32.225 { 00:34:32.225 "name": null, 00:34:32.225 "uuid": "9d9cae3e-b477-407f-b881-5d3eb9ca06e5", 00:34:32.225 "is_configured": false, 00:34:32.225 "data_offset": 0, 00:34:32.225 "data_size": 63488 00:34:32.225 }, 00:34:32.225 { 00:34:32.225 "name": "BaseBdev3", 00:34:32.225 "uuid": "b3d7c5a3-a9e2-4a0f-8b76-449052801022", 00:34:32.225 "is_configured": true, 00:34:32.225 "data_offset": 2048, 00:34:32.225 "data_size": 63488 00:34:32.225 } 00:34:32.225 ] 00:34:32.225 }' 
00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:32.225 23:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:32.484 23:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:34:32.484 23:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:32.484 23:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.484 23:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:32.484 23:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.484 23:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:34:32.484 23:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:34:32.484 23:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.484 23:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:32.484 [2024-12-09 23:16:13.116865] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:34:32.743 23:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.743 23:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:34:32.743 23:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:32.743 23:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:32.743 23:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:32.743 
23:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:32.743 23:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:32.743 23:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:32.743 23:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:32.743 23:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:32.743 23:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:32.743 23:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:32.743 23:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.743 23:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:32.743 23:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:32.743 23:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.743 23:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:32.743 "name": "Existed_Raid", 00:34:32.744 "uuid": "ec4d582d-c1dc-42f2-8291-4a9ec32e7256", 00:34:32.744 "strip_size_kb": 0, 00:34:32.744 "state": "configuring", 00:34:32.744 "raid_level": "raid1", 00:34:32.744 "superblock": true, 00:34:32.744 "num_base_bdevs": 3, 00:34:32.744 "num_base_bdevs_discovered": 1, 00:34:32.744 "num_base_bdevs_operational": 3, 00:34:32.744 "base_bdevs_list": [ 00:34:32.744 { 00:34:32.744 "name": "BaseBdev1", 00:34:32.744 "uuid": "a247d804-8433-4416-839b-53e7acc214a4", 00:34:32.744 "is_configured": true, 00:34:32.744 "data_offset": 2048, 00:34:32.744 "data_size": 63488 00:34:32.744 }, 00:34:32.744 { 
00:34:32.744 "name": null, 00:34:32.744 "uuid": "9d9cae3e-b477-407f-b881-5d3eb9ca06e5", 00:34:32.744 "is_configured": false, 00:34:32.744 "data_offset": 0, 00:34:32.744 "data_size": 63488 00:34:32.744 }, 00:34:32.744 { 00:34:32.744 "name": null, 00:34:32.744 "uuid": "b3d7c5a3-a9e2-4a0f-8b76-449052801022", 00:34:32.744 "is_configured": false, 00:34:32.744 "data_offset": 0, 00:34:32.744 "data_size": 63488 00:34:32.744 } 00:34:32.744 ] 00:34:32.744 }' 00:34:32.744 23:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:32.744 23:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:33.003 23:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:33.003 23:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:34:33.003 23:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.003 23:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:33.003 23:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.003 23:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:34:33.003 23:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:34:33.003 23:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.003 23:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:33.003 [2024-12-09 23:16:13.584262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:33.003 23:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.003 23:16:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:34:33.003 23:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:33.003 23:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:33.003 23:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:33.003 23:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:33.003 23:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:33.003 23:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:33.003 23:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:33.003 23:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:33.003 23:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:33.003 23:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:33.003 23:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:33.003 23:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.003 23:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:33.003 23:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.003 23:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:33.003 "name": "Existed_Raid", 00:34:33.003 "uuid": "ec4d582d-c1dc-42f2-8291-4a9ec32e7256", 00:34:33.003 "strip_size_kb": 0, 
00:34:33.003 "state": "configuring", 00:34:33.003 "raid_level": "raid1", 00:34:33.003 "superblock": true, 00:34:33.003 "num_base_bdevs": 3, 00:34:33.003 "num_base_bdevs_discovered": 2, 00:34:33.003 "num_base_bdevs_operational": 3, 00:34:33.003 "base_bdevs_list": [ 00:34:33.003 { 00:34:33.003 "name": "BaseBdev1", 00:34:33.003 "uuid": "a247d804-8433-4416-839b-53e7acc214a4", 00:34:33.003 "is_configured": true, 00:34:33.003 "data_offset": 2048, 00:34:33.004 "data_size": 63488 00:34:33.004 }, 00:34:33.004 { 00:34:33.004 "name": null, 00:34:33.004 "uuid": "9d9cae3e-b477-407f-b881-5d3eb9ca06e5", 00:34:33.004 "is_configured": false, 00:34:33.004 "data_offset": 0, 00:34:33.004 "data_size": 63488 00:34:33.004 }, 00:34:33.004 { 00:34:33.004 "name": "BaseBdev3", 00:34:33.004 "uuid": "b3d7c5a3-a9e2-4a0f-8b76-449052801022", 00:34:33.004 "is_configured": true, 00:34:33.004 "data_offset": 2048, 00:34:33.004 "data_size": 63488 00:34:33.004 } 00:34:33.004 ] 00:34:33.004 }' 00:34:33.004 23:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:33.004 23:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:33.571 23:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:34:33.571 23:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:33.571 23:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.571 23:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:33.571 23:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.571 23:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:34:33.571 23:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete 
BaseBdev1 00:34:33.571 23:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.571 23:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:33.571 [2024-12-09 23:16:14.075597] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:33.571 23:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.571 23:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:34:33.571 23:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:33.571 23:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:33.571 23:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:33.571 23:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:33.571 23:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:33.571 23:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:33.571 23:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:33.571 23:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:33.571 23:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:33.571 23:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:33.571 23:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.571 23:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:33.571 23:16:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:33.862 23:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.862 23:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:33.862 "name": "Existed_Raid", 00:34:33.862 "uuid": "ec4d582d-c1dc-42f2-8291-4a9ec32e7256", 00:34:33.862 "strip_size_kb": 0, 00:34:33.862 "state": "configuring", 00:34:33.862 "raid_level": "raid1", 00:34:33.862 "superblock": true, 00:34:33.862 "num_base_bdevs": 3, 00:34:33.862 "num_base_bdevs_discovered": 1, 00:34:33.862 "num_base_bdevs_operational": 3, 00:34:33.862 "base_bdevs_list": [ 00:34:33.862 { 00:34:33.862 "name": null, 00:34:33.862 "uuid": "a247d804-8433-4416-839b-53e7acc214a4", 00:34:33.862 "is_configured": false, 00:34:33.862 "data_offset": 0, 00:34:33.862 "data_size": 63488 00:34:33.862 }, 00:34:33.862 { 00:34:33.862 "name": null, 00:34:33.862 "uuid": "9d9cae3e-b477-407f-b881-5d3eb9ca06e5", 00:34:33.862 "is_configured": false, 00:34:33.862 "data_offset": 0, 00:34:33.862 "data_size": 63488 00:34:33.862 }, 00:34:33.862 { 00:34:33.862 "name": "BaseBdev3", 00:34:33.862 "uuid": "b3d7c5a3-a9e2-4a0f-8b76-449052801022", 00:34:33.862 "is_configured": true, 00:34:33.862 "data_offset": 2048, 00:34:33.862 "data_size": 63488 00:34:33.862 } 00:34:33.862 ] 00:34:33.862 }' 00:34:33.862 23:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:33.862 23:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:34.122 23:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:34:34.122 23:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:34.122 23:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:34.122 23:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:34.122 23:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.122 23:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:34:34.122 23:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:34:34.122 23:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.122 23:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:34.122 [2024-12-09 23:16:14.631524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:34.122 23:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.122 23:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:34:34.122 23:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:34.122 23:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:34.122 23:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:34.122 23:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:34.122 23:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:34.122 23:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:34.122 23:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:34.122 23:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:34:34.122 23:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:34.122 23:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:34.122 23:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.122 23:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:34.122 23:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:34.122 23:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.122 23:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:34.122 "name": "Existed_Raid", 00:34:34.122 "uuid": "ec4d582d-c1dc-42f2-8291-4a9ec32e7256", 00:34:34.122 "strip_size_kb": 0, 00:34:34.122 "state": "configuring", 00:34:34.122 "raid_level": "raid1", 00:34:34.122 "superblock": true, 00:34:34.122 "num_base_bdevs": 3, 00:34:34.122 "num_base_bdevs_discovered": 2, 00:34:34.122 "num_base_bdevs_operational": 3, 00:34:34.122 "base_bdevs_list": [ 00:34:34.122 { 00:34:34.122 "name": null, 00:34:34.122 "uuid": "a247d804-8433-4416-839b-53e7acc214a4", 00:34:34.122 "is_configured": false, 00:34:34.122 "data_offset": 0, 00:34:34.122 "data_size": 63488 00:34:34.122 }, 00:34:34.122 { 00:34:34.122 "name": "BaseBdev2", 00:34:34.122 "uuid": "9d9cae3e-b477-407f-b881-5d3eb9ca06e5", 00:34:34.122 "is_configured": true, 00:34:34.122 "data_offset": 2048, 00:34:34.122 "data_size": 63488 00:34:34.122 }, 00:34:34.122 { 00:34:34.122 "name": "BaseBdev3", 00:34:34.122 "uuid": "b3d7c5a3-a9e2-4a0f-8b76-449052801022", 00:34:34.122 "is_configured": true, 00:34:34.122 "data_offset": 2048, 00:34:34.122 "data_size": 63488 00:34:34.122 } 00:34:34.122 ] 00:34:34.122 }' 00:34:34.122 23:16:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:34.122 23:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a247d804-8433-4416-839b-53e7acc214a4 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:34.689 [2024-12-09 23:16:15.171684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:34:34.689 [2024-12-09 23:16:15.171948] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:34:34.689 [2024-12-09 23:16:15.171964] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:34:34.689 [2024-12-09 23:16:15.172260] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:34:34.689 [2024-12-09 23:16:15.172442] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:34:34.689 [2024-12-09 23:16:15.172466] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:34:34.689 NewBaseBdev 00:34:34.689 [2024-12-09 23:16:15.172621] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:34.689 [ 00:34:34.689 { 00:34:34.689 "name": "NewBaseBdev", 00:34:34.689 "aliases": [ 00:34:34.689 "a247d804-8433-4416-839b-53e7acc214a4" 00:34:34.689 ], 00:34:34.689 "product_name": "Malloc disk", 00:34:34.689 "block_size": 512, 00:34:34.689 "num_blocks": 65536, 00:34:34.689 "uuid": "a247d804-8433-4416-839b-53e7acc214a4", 00:34:34.689 "assigned_rate_limits": { 00:34:34.689 "rw_ios_per_sec": 0, 00:34:34.689 "rw_mbytes_per_sec": 0, 00:34:34.689 "r_mbytes_per_sec": 0, 00:34:34.689 "w_mbytes_per_sec": 0 00:34:34.689 }, 00:34:34.689 "claimed": true, 00:34:34.689 "claim_type": "exclusive_write", 00:34:34.689 "zoned": false, 00:34:34.689 "supported_io_types": { 00:34:34.689 "read": true, 00:34:34.689 "write": true, 00:34:34.689 "unmap": true, 00:34:34.689 "flush": true, 00:34:34.689 "reset": true, 00:34:34.689 "nvme_admin": false, 00:34:34.689 "nvme_io": false, 00:34:34.689 "nvme_io_md": false, 00:34:34.689 "write_zeroes": true, 00:34:34.689 "zcopy": true, 00:34:34.689 "get_zone_info": false, 00:34:34.689 "zone_management": false, 00:34:34.689 "zone_append": false, 00:34:34.689 "compare": false, 00:34:34.689 "compare_and_write": false, 00:34:34.689 "abort": true, 00:34:34.689 "seek_hole": false, 00:34:34.689 "seek_data": false, 00:34:34.689 "copy": true, 00:34:34.689 "nvme_iov_md": false 00:34:34.689 }, 00:34:34.689 "memory_domains": [ 00:34:34.689 { 00:34:34.689 "dma_device_id": "system", 00:34:34.689 "dma_device_type": 1 00:34:34.689 }, 00:34:34.689 { 00:34:34.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:34.689 "dma_device_type": 2 00:34:34.689 } 00:34:34.689 ], 00:34:34.689 
"driver_specific": {} 00:34:34.689 } 00:34:34.689 ] 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.689 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:34.689 "name": "Existed_Raid", 00:34:34.689 "uuid": "ec4d582d-c1dc-42f2-8291-4a9ec32e7256", 00:34:34.690 "strip_size_kb": 0, 00:34:34.690 "state": "online", 00:34:34.690 "raid_level": "raid1", 00:34:34.690 "superblock": true, 00:34:34.690 "num_base_bdevs": 3, 00:34:34.690 "num_base_bdevs_discovered": 3, 00:34:34.690 "num_base_bdevs_operational": 3, 00:34:34.690 "base_bdevs_list": [ 00:34:34.690 { 00:34:34.690 "name": "NewBaseBdev", 00:34:34.690 "uuid": "a247d804-8433-4416-839b-53e7acc214a4", 00:34:34.690 "is_configured": true, 00:34:34.690 "data_offset": 2048, 00:34:34.690 "data_size": 63488 00:34:34.690 }, 00:34:34.690 { 00:34:34.690 "name": "BaseBdev2", 00:34:34.690 "uuid": "9d9cae3e-b477-407f-b881-5d3eb9ca06e5", 00:34:34.690 "is_configured": true, 00:34:34.690 "data_offset": 2048, 00:34:34.690 "data_size": 63488 00:34:34.690 }, 00:34:34.690 { 00:34:34.690 "name": "BaseBdev3", 00:34:34.690 "uuid": "b3d7c5a3-a9e2-4a0f-8b76-449052801022", 00:34:34.690 "is_configured": true, 00:34:34.690 "data_offset": 2048, 00:34:34.690 "data_size": 63488 00:34:34.690 } 00:34:34.690 ] 00:34:34.690 }' 00:34:34.690 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:34.690 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:35.257 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:34:35.257 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:34:35.257 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:34:35.257 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:34:35.257 23:16:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:34:35.257 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:34:35.257 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:34:35.257 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.257 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:35.257 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:34:35.257 [2024-12-09 23:16:15.639448] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:35.257 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.257 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:35.257 "name": "Existed_Raid", 00:34:35.257 "aliases": [ 00:34:35.257 "ec4d582d-c1dc-42f2-8291-4a9ec32e7256" 00:34:35.257 ], 00:34:35.257 "product_name": "Raid Volume", 00:34:35.257 "block_size": 512, 00:34:35.257 "num_blocks": 63488, 00:34:35.257 "uuid": "ec4d582d-c1dc-42f2-8291-4a9ec32e7256", 00:34:35.257 "assigned_rate_limits": { 00:34:35.257 "rw_ios_per_sec": 0, 00:34:35.257 "rw_mbytes_per_sec": 0, 00:34:35.257 "r_mbytes_per_sec": 0, 00:34:35.257 "w_mbytes_per_sec": 0 00:34:35.257 }, 00:34:35.257 "claimed": false, 00:34:35.257 "zoned": false, 00:34:35.257 "supported_io_types": { 00:34:35.257 "read": true, 00:34:35.257 "write": true, 00:34:35.257 "unmap": false, 00:34:35.257 "flush": false, 00:34:35.257 "reset": true, 00:34:35.257 "nvme_admin": false, 00:34:35.258 "nvme_io": false, 00:34:35.258 "nvme_io_md": false, 00:34:35.258 "write_zeroes": true, 00:34:35.258 "zcopy": false, 00:34:35.258 "get_zone_info": false, 00:34:35.258 "zone_management": false, 00:34:35.258 "zone_append": false, 
00:34:35.258 "compare": false, 00:34:35.258 "compare_and_write": false, 00:34:35.258 "abort": false, 00:34:35.258 "seek_hole": false, 00:34:35.258 "seek_data": false, 00:34:35.258 "copy": false, 00:34:35.258 "nvme_iov_md": false 00:34:35.258 }, 00:34:35.258 "memory_domains": [ 00:34:35.258 { 00:34:35.258 "dma_device_id": "system", 00:34:35.258 "dma_device_type": 1 00:34:35.258 }, 00:34:35.258 { 00:34:35.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:35.258 "dma_device_type": 2 00:34:35.258 }, 00:34:35.258 { 00:34:35.258 "dma_device_id": "system", 00:34:35.258 "dma_device_type": 1 00:34:35.258 }, 00:34:35.258 { 00:34:35.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:35.258 "dma_device_type": 2 00:34:35.258 }, 00:34:35.258 { 00:34:35.258 "dma_device_id": "system", 00:34:35.258 "dma_device_type": 1 00:34:35.258 }, 00:34:35.258 { 00:34:35.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:35.258 "dma_device_type": 2 00:34:35.258 } 00:34:35.258 ], 00:34:35.258 "driver_specific": { 00:34:35.258 "raid": { 00:34:35.258 "uuid": "ec4d582d-c1dc-42f2-8291-4a9ec32e7256", 00:34:35.258 "strip_size_kb": 0, 00:34:35.258 "state": "online", 00:34:35.258 "raid_level": "raid1", 00:34:35.258 "superblock": true, 00:34:35.258 "num_base_bdevs": 3, 00:34:35.258 "num_base_bdevs_discovered": 3, 00:34:35.258 "num_base_bdevs_operational": 3, 00:34:35.258 "base_bdevs_list": [ 00:34:35.258 { 00:34:35.258 "name": "NewBaseBdev", 00:34:35.258 "uuid": "a247d804-8433-4416-839b-53e7acc214a4", 00:34:35.258 "is_configured": true, 00:34:35.258 "data_offset": 2048, 00:34:35.258 "data_size": 63488 00:34:35.258 }, 00:34:35.258 { 00:34:35.258 "name": "BaseBdev2", 00:34:35.258 "uuid": "9d9cae3e-b477-407f-b881-5d3eb9ca06e5", 00:34:35.258 "is_configured": true, 00:34:35.258 "data_offset": 2048, 00:34:35.258 "data_size": 63488 00:34:35.258 }, 00:34:35.258 { 00:34:35.258 "name": "BaseBdev3", 00:34:35.258 "uuid": "b3d7c5a3-a9e2-4a0f-8b76-449052801022", 00:34:35.258 "is_configured": true, 00:34:35.258 
"data_offset": 2048, 00:34:35.258 "data_size": 63488 00:34:35.258 } 00:34:35.258 ] 00:34:35.258 } 00:34:35.258 } 00:34:35.258 }' 00:34:35.258 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:35.258 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:34:35.258 BaseBdev2 00:34:35.258 BaseBdev3' 00:34:35.258 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:35.258 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:34:35.258 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:35.258 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:34:35.258 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:35.258 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.258 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:35.258 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.258 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:35.258 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:35.258 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:35.258 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:34:35.258 23:16:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.258 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:35.258 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:35.258 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.258 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:35.258 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:35.258 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:35.258 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:35.258 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:34:35.258 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.258 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:35.258 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.258 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:35.258 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:35.258 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:34:35.258 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.258 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:34:35.258 [2024-12-09 23:16:15.886763] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:35.258 [2024-12-09 23:16:15.886803] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:35.258 [2024-12-09 23:16:15.886886] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:35.258 [2024-12-09 23:16:15.887189] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:35.258 [2024-12-09 23:16:15.887212] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:34:35.258 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.258 23:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 67900 00:34:35.258 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 67900 ']' 00:34:35.518 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 67900 00:34:35.518 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:34:35.518 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:35.518 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67900 00:34:35.518 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:35.518 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:35.518 killing process with pid 67900 00:34:35.518 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67900' 00:34:35.518 23:16:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@973 -- # kill 67900 00:34:35.518 [2024-12-09 23:16:15.937752] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:35.518 23:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 67900 00:34:35.776 [2024-12-09 23:16:16.248514] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:37.159 23:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:34:37.159 00:34:37.159 real 0m10.249s 00:34:37.159 user 0m16.159s 00:34:37.159 sys 0m1.984s 00:34:37.159 23:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:37.159 23:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:37.159 ************************************ 00:34:37.160 END TEST raid_state_function_test_sb 00:34:37.160 ************************************ 00:34:37.160 23:16:17 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:34:37.160 23:16:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:37.160 23:16:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:37.160 23:16:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:37.160 ************************************ 00:34:37.160 START TEST raid_superblock_test 00:34:37.160 ************************************ 00:34:37.160 23:16:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:34:37.160 23:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:34:37.160 23:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:34:37.160 23:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:34:37.160 23:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:34:37.160 
23:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:34:37.160 23:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:34:37.160 23:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:34:37.160 23:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:34:37.160 23:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:34:37.160 23:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:34:37.160 23:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:34:37.160 23:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:34:37.160 23:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:34:37.160 23:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:34:37.160 23:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:34:37.160 23:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68515 00:34:37.160 23:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:34:37.160 23:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68515 00:34:37.160 23:16:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68515 ']' 00:34:37.160 23:16:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:37.160 23:16:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:37.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:37.160 23:16:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:37.160 23:16:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:37.160 23:16:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:37.160 [2024-12-09 23:16:17.603915] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:34:37.160 [2024-12-09 23:16:17.604043] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68515 ] 00:34:37.419 [2024-12-09 23:16:17.809866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:37.419 [2024-12-09 23:16:17.945382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:37.697 [2024-12-09 23:16:18.160095] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:37.697 [2024-12-09 23:16:18.160162] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:37.956 23:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:37.956 23:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:34:37.956 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:34:37.956 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:34:37.956 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:34:37.956 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:34:37.956 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:34:37.956 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:37.956 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:34:37.956 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:37.956 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:34:37.956 23:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.956 23:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:37.956 malloc1 00:34:37.956 23:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.956 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:37.956 23:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.956 23:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:37.956 [2024-12-09 23:16:18.565866] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:37.956 [2024-12-09 23:16:18.565931] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:37.956 [2024-12-09 23:16:18.565958] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:34:37.956 [2024-12-09 23:16:18.565971] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:37.956 [2024-12-09 23:16:18.568677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:37.956 [2024-12-09 23:16:18.568716] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:37.956 pt1 00:34:37.956 23:16:18 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.956 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:34:37.956 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:34:37.956 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:34:37.956 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:34:37.956 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:34:37.956 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:37.956 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:34:37.956 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:37.956 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:34:37.956 23:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.956 23:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:38.215 malloc2 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:38.215 [2024-12-09 23:16:18.623799] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:38.215 [2024-12-09 23:16:18.623861] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:38.215 [2024-12-09 23:16:18.623905] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:34:38.215 [2024-12-09 23:16:18.623917] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:38.215 [2024-12-09 23:16:18.626528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:38.215 [2024-12-09 23:16:18.626567] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:38.215 pt2 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:38.215 malloc3 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:38.215 [2024-12-09 23:16:18.696108] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:34:38.215 [2024-12-09 23:16:18.696163] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:38.215 [2024-12-09 23:16:18.696187] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:34:38.215 [2024-12-09 23:16:18.696199] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:38.215 [2024-12-09 23:16:18.698720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:38.215 [2024-12-09 23:16:18.698756] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:34:38.215 pt3 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:38.215 [2024-12-09 23:16:18.708136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:38.215 [2024-12-09 23:16:18.710339] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:38.215 [2024-12-09 23:16:18.710425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:34:38.215 [2024-12-09 23:16:18.710582] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:34:38.215 [2024-12-09 23:16:18.710604] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:34:38.215 [2024-12-09 23:16:18.710891] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:34:38.215 [2024-12-09 23:16:18.711071] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:34:38.215 [2024-12-09 23:16:18.711096] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:34:38.215 [2024-12-09 23:16:18.711272] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:38.215 23:16:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.215 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:38.215 "name": "raid_bdev1", 00:34:38.215 "uuid": "19d049b8-a393-4a4c-bd2a-fb2ce3c21a16", 00:34:38.215 "strip_size_kb": 0, 00:34:38.215 "state": "online", 00:34:38.215 "raid_level": "raid1", 00:34:38.215 "superblock": true, 00:34:38.215 "num_base_bdevs": 3, 00:34:38.215 "num_base_bdevs_discovered": 3, 00:34:38.215 "num_base_bdevs_operational": 3, 00:34:38.215 "base_bdevs_list": [ 00:34:38.215 { 00:34:38.215 "name": "pt1", 00:34:38.215 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:38.215 "is_configured": true, 00:34:38.215 "data_offset": 2048, 00:34:38.215 "data_size": 63488 00:34:38.215 }, 00:34:38.215 { 00:34:38.215 "name": "pt2", 00:34:38.215 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:38.215 "is_configured": true, 00:34:38.215 "data_offset": 2048, 00:34:38.215 "data_size": 63488 00:34:38.215 }, 00:34:38.215 { 00:34:38.215 "name": "pt3", 00:34:38.216 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:38.216 "is_configured": true, 00:34:38.216 "data_offset": 2048, 00:34:38.216 "data_size": 63488 00:34:38.216 } 00:34:38.216 ] 00:34:38.216 }' 00:34:38.216 23:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:34:38.216 23:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:38.475 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:34:38.475 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:34:38.475 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:34:38.475 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:34:38.475 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:34:38.475 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:34:38.475 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:38.475 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.475 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:38.475 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:34:38.475 [2024-12-09 23:16:19.095858] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:38.734 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.734 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:38.734 "name": "raid_bdev1", 00:34:38.734 "aliases": [ 00:34:38.734 "19d049b8-a393-4a4c-bd2a-fb2ce3c21a16" 00:34:38.734 ], 00:34:38.734 "product_name": "Raid Volume", 00:34:38.734 "block_size": 512, 00:34:38.734 "num_blocks": 63488, 00:34:38.734 "uuid": "19d049b8-a393-4a4c-bd2a-fb2ce3c21a16", 00:34:38.734 "assigned_rate_limits": { 00:34:38.734 "rw_ios_per_sec": 0, 00:34:38.734 "rw_mbytes_per_sec": 0, 00:34:38.734 "r_mbytes_per_sec": 0, 00:34:38.734 "w_mbytes_per_sec": 0 
00:34:38.734 }, 00:34:38.734 "claimed": false, 00:34:38.734 "zoned": false, 00:34:38.734 "supported_io_types": { 00:34:38.734 "read": true, 00:34:38.734 "write": true, 00:34:38.734 "unmap": false, 00:34:38.734 "flush": false, 00:34:38.734 "reset": true, 00:34:38.734 "nvme_admin": false, 00:34:38.734 "nvme_io": false, 00:34:38.734 "nvme_io_md": false, 00:34:38.734 "write_zeroes": true, 00:34:38.734 "zcopy": false, 00:34:38.734 "get_zone_info": false, 00:34:38.734 "zone_management": false, 00:34:38.734 "zone_append": false, 00:34:38.734 "compare": false, 00:34:38.734 "compare_and_write": false, 00:34:38.734 "abort": false, 00:34:38.734 "seek_hole": false, 00:34:38.734 "seek_data": false, 00:34:38.734 "copy": false, 00:34:38.734 "nvme_iov_md": false 00:34:38.734 }, 00:34:38.734 "memory_domains": [ 00:34:38.734 { 00:34:38.734 "dma_device_id": "system", 00:34:38.734 "dma_device_type": 1 00:34:38.734 }, 00:34:38.734 { 00:34:38.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:38.734 "dma_device_type": 2 00:34:38.734 }, 00:34:38.734 { 00:34:38.734 "dma_device_id": "system", 00:34:38.734 "dma_device_type": 1 00:34:38.734 }, 00:34:38.734 { 00:34:38.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:38.734 "dma_device_type": 2 00:34:38.734 }, 00:34:38.734 { 00:34:38.734 "dma_device_id": "system", 00:34:38.734 "dma_device_type": 1 00:34:38.734 }, 00:34:38.734 { 00:34:38.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:38.734 "dma_device_type": 2 00:34:38.734 } 00:34:38.734 ], 00:34:38.734 "driver_specific": { 00:34:38.734 "raid": { 00:34:38.734 "uuid": "19d049b8-a393-4a4c-bd2a-fb2ce3c21a16", 00:34:38.734 "strip_size_kb": 0, 00:34:38.734 "state": "online", 00:34:38.734 "raid_level": "raid1", 00:34:38.734 "superblock": true, 00:34:38.734 "num_base_bdevs": 3, 00:34:38.734 "num_base_bdevs_discovered": 3, 00:34:38.734 "num_base_bdevs_operational": 3, 00:34:38.734 "base_bdevs_list": [ 00:34:38.734 { 00:34:38.734 "name": "pt1", 00:34:38.734 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:34:38.734 "is_configured": true, 00:34:38.734 "data_offset": 2048, 00:34:38.734 "data_size": 63488 00:34:38.734 }, 00:34:38.734 { 00:34:38.734 "name": "pt2", 00:34:38.734 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:38.734 "is_configured": true, 00:34:38.734 "data_offset": 2048, 00:34:38.734 "data_size": 63488 00:34:38.734 }, 00:34:38.734 { 00:34:38.734 "name": "pt3", 00:34:38.734 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:38.734 "is_configured": true, 00:34:38.734 "data_offset": 2048, 00:34:38.734 "data_size": 63488 00:34:38.734 } 00:34:38.734 ] 00:34:38.734 } 00:34:38.734 } 00:34:38.734 }' 00:34:38.734 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:38.734 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:34:38.734 pt2 00:34:38.734 pt3' 00:34:38.734 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:38.734 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:34:38.734 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:38.734 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:34:38.734 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.734 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:38.734 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:38.734 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.734 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:34:38.735 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:38.735 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:38.735 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:38.735 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:34:38.735 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.735 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:38.735 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.735 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:38.735 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:38.735 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:38.735 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:34:38.735 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:38.735 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.735 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:38.735 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.735 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:38.735 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:38.735 23:16:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:38.735 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.735 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:34:38.735 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:38.735 [2024-12-09 23:16:19.331585] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:38.735 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=19d049b8-a393-4a4c-bd2a-fb2ce3c21a16 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 19d049b8-a393-4a4c-bd2a-fb2ce3c21a16 ']' 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:38.994 [2024-12-09 23:16:19.391231] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:38.994 [2024-12-09 23:16:19.391268] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:38.994 [2024-12-09 23:16:19.391354] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:38.994 [2024-12-09 23:16:19.391447] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:38.994 [2024-12-09 23:16:19.391460] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.994 23:16:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 
00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd 
bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:38.994 [2024-12-09 23:16:19.543074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:34:38.994 [2024-12-09 23:16:19.545176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:34:38.994 [2024-12-09 23:16:19.545244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:34:38.994 [2024-12-09 23:16:19.545296] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:34:38.994 [2024-12-09 23:16:19.545349] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:34:38.994 [2024-12-09 23:16:19.545371] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:34:38.994 [2024-12-09 23:16:19.545402] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:38.994 [2024-12-09 23:16:19.545415] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:34:38.994 request: 00:34:38.994 { 00:34:38.994 "name": "raid_bdev1", 00:34:38.994 "raid_level": "raid1", 00:34:38.994 "base_bdevs": [ 00:34:38.994 "malloc1", 00:34:38.994 "malloc2", 00:34:38.994 "malloc3" 00:34:38.994 ], 00:34:38.994 "superblock": false, 00:34:38.994 "method": "bdev_raid_create", 00:34:38.994 "req_id": 1 00:34:38.994 } 00:34:38.994 Got JSON-RPC error response 00:34:38.994 response: 00:34:38.994 { 00:34:38.994 "code": -17, 00:34:38.994 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:34:38.994 } 00:34:38.994 23:16:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:38.994 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:38.995 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:34:38.995 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:38.995 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.995 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:38.995 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.995 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:34:38.995 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:34:38.995 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:38.995 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.995 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:38.995 [2024-12-09 23:16:19.606962] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:38.995 [2024-12-09 23:16:19.607185] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:38.995 [2024-12-09 23:16:19.607222] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:34:38.995 [2024-12-09 23:16:19.607235] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:38.995 [2024-12-09 23:16:19.609860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:38.995 [2024-12-09 23:16:19.609903] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:38.995 [2024-12-09 23:16:19.609997] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:34:38.995 [2024-12-09 23:16:19.610060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:38.995 pt1 00:34:38.995 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.995 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:34:38.995 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:38.995 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:38.995 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:38.995 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:38.995 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:38.995 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:38.995 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:38.995 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:38.995 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:38.995 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:38.995 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:38.995 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:38.995 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:39.253 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.253 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:39.253 "name": "raid_bdev1", 00:34:39.253 "uuid": "19d049b8-a393-4a4c-bd2a-fb2ce3c21a16", 00:34:39.253 "strip_size_kb": 0, 00:34:39.253 "state": "configuring", 00:34:39.253 "raid_level": "raid1", 00:34:39.253 "superblock": true, 00:34:39.253 "num_base_bdevs": 3, 00:34:39.253 "num_base_bdevs_discovered": 1, 00:34:39.253 "num_base_bdevs_operational": 3, 00:34:39.253 "base_bdevs_list": [ 00:34:39.253 { 00:34:39.253 "name": "pt1", 00:34:39.253 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:39.253 "is_configured": true, 00:34:39.253 "data_offset": 2048, 00:34:39.253 "data_size": 63488 00:34:39.253 }, 00:34:39.253 { 00:34:39.253 "name": null, 00:34:39.253 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:39.253 "is_configured": false, 00:34:39.253 "data_offset": 2048, 00:34:39.253 "data_size": 63488 00:34:39.253 }, 00:34:39.253 { 00:34:39.253 "name": null, 00:34:39.253 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:39.253 "is_configured": false, 00:34:39.253 "data_offset": 2048, 00:34:39.253 "data_size": 63488 00:34:39.253 } 00:34:39.253 ] 00:34:39.253 }' 00:34:39.253 23:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:39.253 23:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:39.512 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:34:39.512 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:34:39.512 23:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.512 23:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:39.512 [2024-12-09 23:16:20.098367] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:39.512 [2024-12-09 23:16:20.098599] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:39.512 [2024-12-09 23:16:20.098640] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:34:39.512 [2024-12-09 23:16:20.098653] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:39.512 [2024-12-09 23:16:20.099190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:39.512 [2024-12-09 23:16:20.099212] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:39.512 [2024-12-09 23:16:20.099310] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:34:39.512 [2024-12-09 23:16:20.099335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:39.512 pt2 00:34:39.512 23:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.512 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:34:39.512 23:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.512 23:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:39.512 [2024-12-09 23:16:20.110364] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:34:39.512 23:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.512 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:34:39.512 23:16:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:39.512 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:39.512 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:39.512 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:39.512 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:39.512 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:39.512 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:39.512 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:39.512 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:39.512 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:39.512 23:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.512 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:39.512 23:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:39.512 23:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.770 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:39.770 "name": "raid_bdev1", 00:34:39.770 "uuid": "19d049b8-a393-4a4c-bd2a-fb2ce3c21a16", 00:34:39.770 "strip_size_kb": 0, 00:34:39.770 "state": "configuring", 00:34:39.770 "raid_level": "raid1", 00:34:39.770 "superblock": true, 00:34:39.770 "num_base_bdevs": 3, 00:34:39.770 "num_base_bdevs_discovered": 1, 00:34:39.770 "num_base_bdevs_operational": 3, 00:34:39.770 "base_bdevs_list": [ 
00:34:39.770 { 00:34:39.770 "name": "pt1", 00:34:39.770 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:39.770 "is_configured": true, 00:34:39.770 "data_offset": 2048, 00:34:39.770 "data_size": 63488 00:34:39.770 }, 00:34:39.770 { 00:34:39.770 "name": null, 00:34:39.770 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:39.770 "is_configured": false, 00:34:39.770 "data_offset": 0, 00:34:39.770 "data_size": 63488 00:34:39.770 }, 00:34:39.770 { 00:34:39.770 "name": null, 00:34:39.770 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:39.770 "is_configured": false, 00:34:39.770 "data_offset": 2048, 00:34:39.770 "data_size": 63488 00:34:39.770 } 00:34:39.770 ] 00:34:39.770 }' 00:34:39.770 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:39.770 23:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:40.028 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:34:40.028 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:34:40.028 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:40.028 23:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.028 23:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:40.028 [2024-12-09 23:16:20.533918] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:40.028 [2024-12-09 23:16:20.534157] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:40.028 [2024-12-09 23:16:20.534189] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:34:40.028 [2024-12-09 23:16:20.534216] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:40.028 [2024-12-09 23:16:20.534747] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:40.028 [2024-12-09 23:16:20.534772] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:40.028 [2024-12-09 23:16:20.534868] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:34:40.028 [2024-12-09 23:16:20.534907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:40.028 pt2 00:34:40.028 23:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.028 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:34:40.028 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:34:40.028 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:34:40.028 23:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.028 23:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:40.028 [2024-12-09 23:16:20.545885] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:34:40.028 [2024-12-09 23:16:20.545948] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:40.028 [2024-12-09 23:16:20.545968] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:34:40.028 [2024-12-09 23:16:20.545981] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:40.028 [2024-12-09 23:16:20.546492] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:40.028 [2024-12-09 23:16:20.546521] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:34:40.028 [2024-12-09 23:16:20.546602] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 
00:34:40.028 [2024-12-09 23:16:20.546631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:34:40.028 [2024-12-09 23:16:20.546764] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:34:40.028 [2024-12-09 23:16:20.546780] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:34:40.028 [2024-12-09 23:16:20.547042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:34:40.028 [2024-12-09 23:16:20.547213] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:34:40.028 [2024-12-09 23:16:20.547224] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:34:40.028 [2024-12-09 23:16:20.547389] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:40.028 pt3 00:34:40.028 23:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.028 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:34:40.028 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:34:40.028 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:34:40.028 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:40.028 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:40.028 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:40.028 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:40.028 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:40.028 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:34:40.028 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:40.028 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:40.028 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:40.028 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:40.028 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:40.028 23:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.028 23:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:40.028 23:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.028 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:40.028 "name": "raid_bdev1", 00:34:40.028 "uuid": "19d049b8-a393-4a4c-bd2a-fb2ce3c21a16", 00:34:40.028 "strip_size_kb": 0, 00:34:40.028 "state": "online", 00:34:40.028 "raid_level": "raid1", 00:34:40.028 "superblock": true, 00:34:40.028 "num_base_bdevs": 3, 00:34:40.028 "num_base_bdevs_discovered": 3, 00:34:40.028 "num_base_bdevs_operational": 3, 00:34:40.028 "base_bdevs_list": [ 00:34:40.028 { 00:34:40.028 "name": "pt1", 00:34:40.028 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:40.028 "is_configured": true, 00:34:40.028 "data_offset": 2048, 00:34:40.028 "data_size": 63488 00:34:40.028 }, 00:34:40.028 { 00:34:40.028 "name": "pt2", 00:34:40.028 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:40.028 "is_configured": true, 00:34:40.028 "data_offset": 2048, 00:34:40.028 "data_size": 63488 00:34:40.028 }, 00:34:40.028 { 00:34:40.028 "name": "pt3", 00:34:40.029 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:40.029 "is_configured": true, 00:34:40.029 "data_offset": 2048, 00:34:40.029 
"data_size": 63488 00:34:40.029 } 00:34:40.029 ] 00:34:40.029 }' 00:34:40.029 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:40.029 23:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:40.603 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:34:40.603 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:34:40.603 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:34:40.603 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:34:40.603 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:34:40.603 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:34:40.603 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:40.603 23:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.603 23:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:40.603 23:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:34:40.603 [2024-12-09 23:16:20.985801] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:40.603 23:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.603 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:40.603 "name": "raid_bdev1", 00:34:40.603 "aliases": [ 00:34:40.603 "19d049b8-a393-4a4c-bd2a-fb2ce3c21a16" 00:34:40.603 ], 00:34:40.603 "product_name": "Raid Volume", 00:34:40.603 "block_size": 512, 00:34:40.603 "num_blocks": 63488, 00:34:40.603 "uuid": "19d049b8-a393-4a4c-bd2a-fb2ce3c21a16", 00:34:40.603 "assigned_rate_limits": { 
00:34:40.603 "rw_ios_per_sec": 0, 00:34:40.603 "rw_mbytes_per_sec": 0, 00:34:40.603 "r_mbytes_per_sec": 0, 00:34:40.603 "w_mbytes_per_sec": 0 00:34:40.603 }, 00:34:40.603 "claimed": false, 00:34:40.603 "zoned": false, 00:34:40.603 "supported_io_types": { 00:34:40.603 "read": true, 00:34:40.603 "write": true, 00:34:40.603 "unmap": false, 00:34:40.603 "flush": false, 00:34:40.603 "reset": true, 00:34:40.603 "nvme_admin": false, 00:34:40.603 "nvme_io": false, 00:34:40.603 "nvme_io_md": false, 00:34:40.603 "write_zeroes": true, 00:34:40.603 "zcopy": false, 00:34:40.603 "get_zone_info": false, 00:34:40.603 "zone_management": false, 00:34:40.603 "zone_append": false, 00:34:40.603 "compare": false, 00:34:40.603 "compare_and_write": false, 00:34:40.603 "abort": false, 00:34:40.603 "seek_hole": false, 00:34:40.603 "seek_data": false, 00:34:40.603 "copy": false, 00:34:40.603 "nvme_iov_md": false 00:34:40.603 }, 00:34:40.603 "memory_domains": [ 00:34:40.603 { 00:34:40.603 "dma_device_id": "system", 00:34:40.603 "dma_device_type": 1 00:34:40.603 }, 00:34:40.603 { 00:34:40.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:40.603 "dma_device_type": 2 00:34:40.603 }, 00:34:40.603 { 00:34:40.603 "dma_device_id": "system", 00:34:40.603 "dma_device_type": 1 00:34:40.603 }, 00:34:40.603 { 00:34:40.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:40.603 "dma_device_type": 2 00:34:40.603 }, 00:34:40.603 { 00:34:40.603 "dma_device_id": "system", 00:34:40.603 "dma_device_type": 1 00:34:40.603 }, 00:34:40.603 { 00:34:40.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:40.603 "dma_device_type": 2 00:34:40.603 } 00:34:40.603 ], 00:34:40.603 "driver_specific": { 00:34:40.603 "raid": { 00:34:40.603 "uuid": "19d049b8-a393-4a4c-bd2a-fb2ce3c21a16", 00:34:40.603 "strip_size_kb": 0, 00:34:40.603 "state": "online", 00:34:40.603 "raid_level": "raid1", 00:34:40.603 "superblock": true, 00:34:40.603 "num_base_bdevs": 3, 00:34:40.603 "num_base_bdevs_discovered": 3, 00:34:40.603 
"num_base_bdevs_operational": 3, 00:34:40.603 "base_bdevs_list": [ 00:34:40.603 { 00:34:40.603 "name": "pt1", 00:34:40.603 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:40.603 "is_configured": true, 00:34:40.603 "data_offset": 2048, 00:34:40.603 "data_size": 63488 00:34:40.603 }, 00:34:40.603 { 00:34:40.603 "name": "pt2", 00:34:40.603 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:40.603 "is_configured": true, 00:34:40.603 "data_offset": 2048, 00:34:40.603 "data_size": 63488 00:34:40.603 }, 00:34:40.603 { 00:34:40.603 "name": "pt3", 00:34:40.603 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:40.603 "is_configured": true, 00:34:40.603 "data_offset": 2048, 00:34:40.603 "data_size": 63488 00:34:40.603 } 00:34:40.603 ] 00:34:40.603 } 00:34:40.603 } 00:34:40.603 }' 00:34:40.603 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:40.603 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:34:40.603 pt2 00:34:40.603 pt3' 00:34:40.603 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:40.603 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:34:40.603 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:40.603 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:34:40.603 23:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.603 23:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:40.603 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:40.603 23:16:21 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.603 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:40.603 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:40.603 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:40.603 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:34:40.603 23:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.603 23:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:40.603 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:40.603 23:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.603 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:40.603 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:40.603 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:40.603 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:40.603 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:34:40.603 23:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.603 23:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:40.603 23:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.865 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:40.865 23:16:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:40.865 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:34:40.865 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:40.865 23:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.865 23:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:40.865 [2024-12-09 23:16:21.261396] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:40.865 23:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.865 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 19d049b8-a393-4a4c-bd2a-fb2ce3c21a16 '!=' 19d049b8-a393-4a4c-bd2a-fb2ce3c21a16 ']' 00:34:40.865 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:34:40.865 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:34:40.865 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:34:40.865 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:34:40.865 23:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.865 23:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:40.865 [2024-12-09 23:16:21.305108] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:34:40.865 23:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.865 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:40.865 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:40.865 23:16:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:40.865 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:40.865 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:40.865 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:40.865 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:40.865 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:40.865 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:40.865 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:40.865 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:40.865 23:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.865 23:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:40.865 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:40.865 23:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.865 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:40.865 "name": "raid_bdev1", 00:34:40.865 "uuid": "19d049b8-a393-4a4c-bd2a-fb2ce3c21a16", 00:34:40.865 "strip_size_kb": 0, 00:34:40.865 "state": "online", 00:34:40.865 "raid_level": "raid1", 00:34:40.865 "superblock": true, 00:34:40.865 "num_base_bdevs": 3, 00:34:40.865 "num_base_bdevs_discovered": 2, 00:34:40.865 "num_base_bdevs_operational": 2, 00:34:40.865 "base_bdevs_list": [ 00:34:40.865 { 00:34:40.865 "name": null, 00:34:40.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:40.865 
"is_configured": false, 00:34:40.865 "data_offset": 0, 00:34:40.865 "data_size": 63488 00:34:40.865 }, 00:34:40.865 { 00:34:40.865 "name": "pt2", 00:34:40.865 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:40.865 "is_configured": true, 00:34:40.865 "data_offset": 2048, 00:34:40.865 "data_size": 63488 00:34:40.865 }, 00:34:40.865 { 00:34:40.865 "name": "pt3", 00:34:40.865 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:40.865 "is_configured": true, 00:34:40.865 "data_offset": 2048, 00:34:40.865 "data_size": 63488 00:34:40.865 } 00:34:40.865 ] 00:34:40.865 }' 00:34:40.865 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:40.865 23:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:41.130 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:41.130 23:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.130 23:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:41.130 [2024-12-09 23:16:21.708491] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:41.130 [2024-12-09 23:16:21.708521] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:41.130 [2024-12-09 23:16:21.708605] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:41.130 [2024-12-09 23:16:21.708666] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:41.130 [2024-12-09 23:16:21.708683] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:34:41.130 23:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.130 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:41.130 
23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:34:41.130 23:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.130 23:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:41.130 23:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.390 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:34:41.390 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:34:41.390 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:34:41.390 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:34:41.390 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:34:41.390 23:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.390 23:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:41.390 23:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.390 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:34:41.390 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:34:41.390 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:34:41.390 23:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.390 23:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:41.390 23:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.390 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:34:41.390 23:16:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:34:41.390 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:34:41.390 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:34:41.390 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:41.390 23:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.390 23:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:41.390 [2024-12-09 23:16:21.796316] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:41.390 [2024-12-09 23:16:21.796379] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:41.390 [2024-12-09 23:16:21.796412] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:34:41.390 [2024-12-09 23:16:21.796426] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:41.390 [2024-12-09 23:16:21.798828] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:41.390 [2024-12-09 23:16:21.798996] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:41.390 [2024-12-09 23:16:21.799098] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:34:41.390 [2024-12-09 23:16:21.799155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:41.390 pt2 00:34:41.390 23:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.390 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:34:41.390 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:41.390 23:16:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:41.390 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:41.390 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:41.390 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:41.390 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:41.390 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:41.390 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:41.390 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:41.390 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:41.390 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:41.390 23:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.390 23:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:41.390 23:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.390 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:41.390 "name": "raid_bdev1", 00:34:41.390 "uuid": "19d049b8-a393-4a4c-bd2a-fb2ce3c21a16", 00:34:41.390 "strip_size_kb": 0, 00:34:41.390 "state": "configuring", 00:34:41.390 "raid_level": "raid1", 00:34:41.390 "superblock": true, 00:34:41.390 "num_base_bdevs": 3, 00:34:41.390 "num_base_bdevs_discovered": 1, 00:34:41.390 "num_base_bdevs_operational": 2, 00:34:41.390 "base_bdevs_list": [ 00:34:41.390 { 00:34:41.390 "name": null, 00:34:41.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:41.390 
"is_configured": false, 00:34:41.390 "data_offset": 2048, 00:34:41.390 "data_size": 63488 00:34:41.390 }, 00:34:41.390 { 00:34:41.390 "name": "pt2", 00:34:41.390 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:41.390 "is_configured": true, 00:34:41.390 "data_offset": 2048, 00:34:41.390 "data_size": 63488 00:34:41.390 }, 00:34:41.390 { 00:34:41.390 "name": null, 00:34:41.391 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:41.391 "is_configured": false, 00:34:41.391 "data_offset": 2048, 00:34:41.391 "data_size": 63488 00:34:41.391 } 00:34:41.391 ] 00:34:41.391 }' 00:34:41.391 23:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:41.391 23:16:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:41.650 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:34:41.650 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:34:41.650 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:34:41.650 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:34:41.650 23:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.650 23:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:41.650 [2024-12-09 23:16:22.211762] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:34:41.650 [2024-12-09 23:16:22.211837] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:41.650 [2024-12-09 23:16:22.211860] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:34:41.650 [2024-12-09 23:16:22.211875] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:41.650 [2024-12-09 23:16:22.212357] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:41.650 [2024-12-09 23:16:22.212381] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:34:41.650 [2024-12-09 23:16:22.212501] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:34:41.650 [2024-12-09 23:16:22.212532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:34:41.650 [2024-12-09 23:16:22.212649] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:34:41.650 [2024-12-09 23:16:22.212662] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:34:41.650 [2024-12-09 23:16:22.212940] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:34:41.650 [2024-12-09 23:16:22.213094] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:34:41.650 [2024-12-09 23:16:22.213111] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:34:41.650 [2024-12-09 23:16:22.213247] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:41.650 pt3 00:34:41.650 23:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.650 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:41.650 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:41.650 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:41.650 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:41.651 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:41.651 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:34:41.651 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:41.651 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:41.651 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:41.651 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:41.651 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:41.651 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:41.651 23:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.651 23:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:41.651 23:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.651 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:41.651 "name": "raid_bdev1", 00:34:41.651 "uuid": "19d049b8-a393-4a4c-bd2a-fb2ce3c21a16", 00:34:41.651 "strip_size_kb": 0, 00:34:41.651 "state": "online", 00:34:41.651 "raid_level": "raid1", 00:34:41.651 "superblock": true, 00:34:41.651 "num_base_bdevs": 3, 00:34:41.651 "num_base_bdevs_discovered": 2, 00:34:41.651 "num_base_bdevs_operational": 2, 00:34:41.651 "base_bdevs_list": [ 00:34:41.651 { 00:34:41.651 "name": null, 00:34:41.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:41.651 "is_configured": false, 00:34:41.651 "data_offset": 2048, 00:34:41.651 "data_size": 63488 00:34:41.651 }, 00:34:41.651 { 00:34:41.651 "name": "pt2", 00:34:41.651 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:41.651 "is_configured": true, 00:34:41.651 "data_offset": 2048, 00:34:41.651 "data_size": 63488 00:34:41.651 }, 00:34:41.651 { 00:34:41.651 "name": "pt3", 00:34:41.651 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:34:41.651 "is_configured": true, 00:34:41.651 "data_offset": 2048, 00:34:41.651 "data_size": 63488 00:34:41.651 } 00:34:41.651 ] 00:34:41.651 }' 00:34:41.651 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:41.651 23:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:42.230 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:42.230 23:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.230 23:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:42.230 [2024-12-09 23:16:22.591341] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:42.230 [2024-12-09 23:16:22.591520] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:42.230 [2024-12-09 23:16:22.591718] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:42.230 [2024-12-09 23:16:22.591883] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:42.230 [2024-12-09 23:16:22.591999] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:34:42.230 23:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.231 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:34:42.231 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:42.231 23:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.231 23:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:42.231 23:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:34:42.231 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:34:42.231 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:34:42.231 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:34:42.231 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:34:42.231 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:34:42.231 23:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.231 23:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:42.231 23:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.231 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:42.231 23:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.231 23:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:42.231 [2024-12-09 23:16:22.659273] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:42.231 [2024-12-09 23:16:22.659352] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:42.231 [2024-12-09 23:16:22.659375] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:34:42.231 [2024-12-09 23:16:22.659386] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:42.231 [2024-12-09 23:16:22.661864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:42.231 [2024-12-09 23:16:22.661908] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:42.231 [2024-12-09 23:16:22.661997] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid 
superblock found on bdev pt1 00:34:42.231 [2024-12-09 23:16:22.662067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:42.231 [2024-12-09 23:16:22.662259] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:34:42.231 [2024-12-09 23:16:22.662272] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:42.231 [2024-12-09 23:16:22.662291] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:34:42.231 [2024-12-09 23:16:22.662363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:42.231 pt1 00:34:42.231 23:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.231 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:34:42.231 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:34:42.231 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:42.231 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:42.231 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:42.231 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:42.231 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:42.231 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:42.231 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:42.231 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:42.231 23:16:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:42.231 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:42.231 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:42.231 23:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.231 23:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:42.231 23:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.231 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:42.231 "name": "raid_bdev1", 00:34:42.231 "uuid": "19d049b8-a393-4a4c-bd2a-fb2ce3c21a16", 00:34:42.231 "strip_size_kb": 0, 00:34:42.231 "state": "configuring", 00:34:42.231 "raid_level": "raid1", 00:34:42.231 "superblock": true, 00:34:42.231 "num_base_bdevs": 3, 00:34:42.231 "num_base_bdevs_discovered": 1, 00:34:42.231 "num_base_bdevs_operational": 2, 00:34:42.231 "base_bdevs_list": [ 00:34:42.231 { 00:34:42.231 "name": null, 00:34:42.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:42.231 "is_configured": false, 00:34:42.231 "data_offset": 2048, 00:34:42.231 "data_size": 63488 00:34:42.231 }, 00:34:42.231 { 00:34:42.231 "name": "pt2", 00:34:42.231 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:42.231 "is_configured": true, 00:34:42.231 "data_offset": 2048, 00:34:42.231 "data_size": 63488 00:34:42.231 }, 00:34:42.231 { 00:34:42.231 "name": null, 00:34:42.231 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:42.231 "is_configured": false, 00:34:42.231 "data_offset": 2048, 00:34:42.231 "data_size": 63488 00:34:42.231 } 00:34:42.231 ] 00:34:42.231 }' 00:34:42.231 23:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:42.231 23:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:34:42.488 23:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:34:42.488 23:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.488 23:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:34:42.488 23:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:42.488 23:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.753 23:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:34:42.753 23:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:34:42.753 23:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.753 23:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:42.753 [2024-12-09 23:16:23.130613] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:34:42.753 [2024-12-09 23:16:23.130685] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:42.753 [2024-12-09 23:16:23.130712] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:34:42.753 [2024-12-09 23:16:23.130725] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:42.753 [2024-12-09 23:16:23.131237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:42.753 [2024-12-09 23:16:23.131258] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:34:42.754 [2024-12-09 23:16:23.131362] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:34:42.754 [2024-12-09 23:16:23.131386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev pt3 is claimed 00:34:42.754 [2024-12-09 23:16:23.131542] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:34:42.754 [2024-12-09 23:16:23.131553] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:34:42.754 [2024-12-09 23:16:23.131808] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:34:42.754 [2024-12-09 23:16:23.131961] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:34:42.754 [2024-12-09 23:16:23.131977] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:34:42.754 [2024-12-09 23:16:23.132122] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:42.754 pt3 00:34:42.754 23:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.754 23:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:42.754 23:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:42.754 23:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:42.754 23:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:42.754 23:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:42.754 23:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:42.754 23:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:42.754 23:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:42.754 23:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:42.754 23:16:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:34:42.754 23:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:42.754 23:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:42.754 23:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.754 23:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:42.754 23:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.754 23:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:42.754 "name": "raid_bdev1", 00:34:42.754 "uuid": "19d049b8-a393-4a4c-bd2a-fb2ce3c21a16", 00:34:42.754 "strip_size_kb": 0, 00:34:42.754 "state": "online", 00:34:42.754 "raid_level": "raid1", 00:34:42.754 "superblock": true, 00:34:42.754 "num_base_bdevs": 3, 00:34:42.754 "num_base_bdevs_discovered": 2, 00:34:42.754 "num_base_bdevs_operational": 2, 00:34:42.754 "base_bdevs_list": [ 00:34:42.754 { 00:34:42.754 "name": null, 00:34:42.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:42.754 "is_configured": false, 00:34:42.754 "data_offset": 2048, 00:34:42.754 "data_size": 63488 00:34:42.754 }, 00:34:42.754 { 00:34:42.754 "name": "pt2", 00:34:42.754 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:42.754 "is_configured": true, 00:34:42.754 "data_offset": 2048, 00:34:42.754 "data_size": 63488 00:34:42.754 }, 00:34:42.754 { 00:34:42.754 "name": "pt3", 00:34:42.754 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:42.754 "is_configured": true, 00:34:42.754 "data_offset": 2048, 00:34:42.754 "data_size": 63488 00:34:42.754 } 00:34:42.754 ] 00:34:42.754 }' 00:34:42.754 23:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:42.754 23:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:43.021 23:16:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:34:43.021 23:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:34:43.021 23:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.021 23:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:43.021 23:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.021 23:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:34:43.021 23:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:34:43.021 23:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:43.021 23:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.021 23:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:43.021 [2024-12-09 23:16:23.630566] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:43.280 23:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.280 23:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 19d049b8-a393-4a4c-bd2a-fb2ce3c21a16 '!=' 19d049b8-a393-4a4c-bd2a-fb2ce3c21a16 ']' 00:34:43.280 23:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68515 00:34:43.280 23:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68515 ']' 00:34:43.280 23:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68515 00:34:43.280 23:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:34:43.280 23:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:43.280 
23:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68515 00:34:43.280 killing process with pid 68515 00:34:43.280 23:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:43.280 23:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:43.280 23:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68515' 00:34:43.280 23:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 68515 00:34:43.280 [2024-12-09 23:16:23.708218] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:43.280 [2024-12-09 23:16:23.708321] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:43.280 23:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68515 00:34:43.280 [2024-12-09 23:16:23.708386] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:43.280 [2024-12-09 23:16:23.708416] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:34:43.537 [2024-12-09 23:16:24.019464] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:44.938 23:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:34:44.938 00:34:44.938 real 0m7.659s 00:34:44.938 user 0m11.969s 00:34:44.938 sys 0m1.497s 00:34:44.938 23:16:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:44.938 23:16:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:44.938 ************************************ 00:34:44.938 END TEST raid_superblock_test 00:34:44.938 ************************************ 00:34:44.938 23:16:25 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 
read 00:34:44.938 23:16:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:34:44.938 23:16:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:44.938 23:16:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:44.938 ************************************ 00:34:44.938 START TEST raid_read_error_test 00:34:44.938 ************************************ 00:34:44.938 23:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:34:44.938 23:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:34:44.938 23:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:34:44.938 23:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:34:44.938 23:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:34:44.938 23:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:34:44.938 23:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:34:44.938 23:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:34:44.938 23:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:34:44.938 23:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:34:44.938 23:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:34:44.938 23:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:34:44.938 23:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:34:44.938 23:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:34:44.938 23:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:34:44.938 23:16:25 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:34:44.938 23:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:34:44.938 23:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:34:44.938 23:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:34:44.938 23:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:34:44.938 23:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:34:44.938 23:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:34:44.938 23:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:34:44.938 23:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:34:44.938 23:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:34:44.938 23:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.lZHuHDVo4R 00:34:44.938 23:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=68962 00:34:44.938 23:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 68962 00:34:44.938 23:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:34:44.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:44.938 23:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 68962 ']' 00:34:44.938 23:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:44.938 23:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:44.938 23:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:44.938 23:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:44.938 23:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:44.938 [2024-12-09 23:16:25.360955] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:34:44.938 [2024-12-09 23:16:25.361082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68962 ] 00:34:44.939 [2024-12-09 23:16:25.540866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:45.197 [2024-12-09 23:16:25.663002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:45.456 [2024-12-09 23:16:25.883647] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:45.456 [2024-12-09 23:16:25.883695] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:45.715 23:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:45.715 23:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:34:45.715 23:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:34:45.715 23:16:26 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:34:45.715 23:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.715 23:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:45.715 BaseBdev1_malloc 00:34:45.715 23:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.715 23:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:34:45.715 23:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.715 23:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:45.715 true 00:34:45.715 23:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.715 23:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:34:45.715 23:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.715 23:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:45.715 [2024-12-09 23:16:26.273558] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:34:45.715 [2024-12-09 23:16:26.273618] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:45.715 [2024-12-09 23:16:26.273642] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:34:45.715 [2024-12-09 23:16:26.273657] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:45.715 [2024-12-09 23:16:26.276199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:45.715 [2024-12-09 23:16:26.276244] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:45.715 BaseBdev1 00:34:45.715 
23:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.715 23:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:34:45.715 23:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:34:45.715 23:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.715 23:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:45.715 BaseBdev2_malloc 00:34:45.715 23:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.715 23:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:34:45.715 23:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.715 23:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:45.715 true 00:34:45.715 23:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.715 23:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:34:45.715 23:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.715 23:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:45.715 [2024-12-09 23:16:26.329006] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:34:45.715 [2024-12-09 23:16:26.329066] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:45.715 [2024-12-09 23:16:26.329086] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:34:45.715 [2024-12-09 23:16:26.329102] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:45.715 [2024-12-09 
23:16:26.331636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:45.715 [2024-12-09 23:16:26.331677] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:34:45.715 BaseBdev2 00:34:45.715 23:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.715 23:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:34:45.715 23:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:34:45.715 23:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.715 23:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:45.974 BaseBdev3_malloc 00:34:45.974 23:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.974 23:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:34:45.974 23:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.974 23:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:45.974 true 00:34:45.974 23:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.974 23:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:34:45.974 23:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.974 23:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:45.974 [2024-12-09 23:16:26.404127] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:34:45.974 [2024-12-09 23:16:26.404187] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:45.974 
[2024-12-09 23:16:26.404210] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:34:45.974 [2024-12-09 23:16:26.404223] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:45.974 [2024-12-09 23:16:26.406711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:45.974 [2024-12-09 23:16:26.406891] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:34:45.974 BaseBdev3 00:34:45.974 23:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.974 23:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:34:45.974 23:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.974 23:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:45.975 [2024-12-09 23:16:26.412174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:45.975 [2024-12-09 23:16:26.414235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:45.975 [2024-12-09 23:16:26.414448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:45.975 [2024-12-09 23:16:26.414666] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:34:45.975 [2024-12-09 23:16:26.414681] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:34:45.975 [2024-12-09 23:16:26.414952] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:34:45.975 [2024-12-09 23:16:26.415113] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:34:45.975 [2024-12-09 23:16:26.415126] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
raid_bdev1, raid_bdev 0x617000008200 00:34:45.975 [2024-12-09 23:16:26.415281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:45.975 23:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.975 23:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:34:45.975 23:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:45.975 23:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:45.975 23:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:45.975 23:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:45.975 23:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:45.975 23:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:45.975 23:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:45.975 23:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:45.975 23:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:45.975 23:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:45.975 23:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.975 23:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:45.975 23:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:45.975 23:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.975 23:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:34:45.975 "name": "raid_bdev1", 00:34:45.975 "uuid": "3fea8b53-c428-4c1c-a20f-fc9d060e6100", 00:34:45.975 "strip_size_kb": 0, 00:34:45.975 "state": "online", 00:34:45.975 "raid_level": "raid1", 00:34:45.975 "superblock": true, 00:34:45.975 "num_base_bdevs": 3, 00:34:45.975 "num_base_bdevs_discovered": 3, 00:34:45.975 "num_base_bdevs_operational": 3, 00:34:45.975 "base_bdevs_list": [ 00:34:45.975 { 00:34:45.975 "name": "BaseBdev1", 00:34:45.975 "uuid": "a6539bbd-da47-59bf-bf4c-e4da128a1b32", 00:34:45.975 "is_configured": true, 00:34:45.975 "data_offset": 2048, 00:34:45.975 "data_size": 63488 00:34:45.975 }, 00:34:45.975 { 00:34:45.975 "name": "BaseBdev2", 00:34:45.975 "uuid": "3583291f-95c4-5930-b20d-b10662119ccc", 00:34:45.975 "is_configured": true, 00:34:45.975 "data_offset": 2048, 00:34:45.975 "data_size": 63488 00:34:45.975 }, 00:34:45.975 { 00:34:45.975 "name": "BaseBdev3", 00:34:45.975 "uuid": "6aa6d720-ff48-524e-8682-aa683a4eb1cc", 00:34:45.975 "is_configured": true, 00:34:45.975 "data_offset": 2048, 00:34:45.975 "data_size": 63488 00:34:45.975 } 00:34:45.975 ] 00:34:45.975 }' 00:34:45.975 23:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:45.975 23:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:46.232 23:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:34:46.233 23:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:34:46.491 [2024-12-09 23:16:26.933257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:34:47.427 23:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:34:47.427 23:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.427 23:16:27 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@10 -- # set +x 00:34:47.427 23:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.427 23:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:34:47.427 23:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:34:47.427 23:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:34:47.427 23:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:34:47.427 23:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:34:47.427 23:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:47.427 23:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:47.427 23:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:47.427 23:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:47.427 23:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:47.427 23:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:47.427 23:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:47.427 23:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:47.427 23:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:47.427 23:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:47.427 23:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:47.427 23:16:27 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.427 23:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:47.427 23:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.427 23:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:47.427 "name": "raid_bdev1", 00:34:47.427 "uuid": "3fea8b53-c428-4c1c-a20f-fc9d060e6100", 00:34:47.427 "strip_size_kb": 0, 00:34:47.427 "state": "online", 00:34:47.427 "raid_level": "raid1", 00:34:47.427 "superblock": true, 00:34:47.427 "num_base_bdevs": 3, 00:34:47.427 "num_base_bdevs_discovered": 3, 00:34:47.427 "num_base_bdevs_operational": 3, 00:34:47.427 "base_bdevs_list": [ 00:34:47.427 { 00:34:47.427 "name": "BaseBdev1", 00:34:47.427 "uuid": "a6539bbd-da47-59bf-bf4c-e4da128a1b32", 00:34:47.427 "is_configured": true, 00:34:47.427 "data_offset": 2048, 00:34:47.427 "data_size": 63488 00:34:47.427 }, 00:34:47.427 { 00:34:47.427 "name": "BaseBdev2", 00:34:47.427 "uuid": "3583291f-95c4-5930-b20d-b10662119ccc", 00:34:47.427 "is_configured": true, 00:34:47.427 "data_offset": 2048, 00:34:47.427 "data_size": 63488 00:34:47.427 }, 00:34:47.427 { 00:34:47.427 "name": "BaseBdev3", 00:34:47.427 "uuid": "6aa6d720-ff48-524e-8682-aa683a4eb1cc", 00:34:47.427 "is_configured": true, 00:34:47.427 "data_offset": 2048, 00:34:47.427 "data_size": 63488 00:34:47.427 } 00:34:47.427 ] 00:34:47.427 }' 00:34:47.427 23:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:47.427 23:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:47.686 23:16:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:47.686 23:16:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.686 23:16:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:47.686 [2024-12-09 
23:16:28.289123] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:47.686 [2024-12-09 23:16:28.289156] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:47.686 [2024-12-09 23:16:28.291968] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:47.686 [2024-12-09 23:16:28.292018] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:47.686 [2024-12-09 23:16:28.292121] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:47.686 [2024-12-09 23:16:28.292133] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:34:47.686 { 00:34:47.686 "results": [ 00:34:47.686 { 00:34:47.686 "job": "raid_bdev1", 00:34:47.686 "core_mask": "0x1", 00:34:47.686 "workload": "randrw", 00:34:47.686 "percentage": 50, 00:34:47.686 "status": "finished", 00:34:47.686 "queue_depth": 1, 00:34:47.686 "io_size": 131072, 00:34:47.686 "runtime": 1.35556, 00:34:47.686 "iops": 13241.02216058308, 00:34:47.686 "mibps": 1655.127770072885, 00:34:47.686 "io_failed": 0, 00:34:47.686 "io_timeout": 0, 00:34:47.686 "avg_latency_us": 72.64081573382505, 00:34:47.686 "min_latency_us": 24.777510040160642, 00:34:47.686 "max_latency_us": 1441.0024096385541 00:34:47.686 } 00:34:47.686 ], 00:34:47.686 "core_count": 1 00:34:47.686 } 00:34:47.686 23:16:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.686 23:16:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 68962 00:34:47.686 23:16:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 68962 ']' 00:34:47.686 23:16:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 68962 00:34:47.686 23:16:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:34:47.686 23:16:28 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:47.686 23:16:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68962 00:34:47.945 killing process with pid 68962 00:34:47.945 23:16:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:47.945 23:16:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:47.945 23:16:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68962' 00:34:47.945 23:16:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 68962 00:34:47.945 [2024-12-09 23:16:28.340144] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:47.945 23:16:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 68962 00:34:48.204 [2024-12-09 23:16:28.592756] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:49.584 23:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:34:49.584 23:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:34:49.584 23:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.lZHuHDVo4R 00:34:49.584 ************************************ 00:34:49.584 END TEST raid_read_error_test 00:34:49.584 ************************************ 00:34:49.584 23:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:34:49.584 23:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:34:49.584 23:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:34:49.584 23:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:34:49.584 23:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:34:49.584 
00:34:49.584 real 0m4.553s 00:34:49.584 user 0m5.355s 00:34:49.584 sys 0m0.632s 00:34:49.584 23:16:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:49.584 23:16:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:49.584 23:16:29 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:34:49.584 23:16:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:34:49.584 23:16:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:49.584 23:16:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:49.584 ************************************ 00:34:49.584 START TEST raid_write_error_test 00:34:49.584 ************************************ 00:34:49.584 23:16:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:34:49.584 23:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:34:49.584 23:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:34:49.584 23:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:34:49.584 23:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:34:49.584 23:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:34:49.584 23:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:34:49.584 23:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:34:49.584 23:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:34:49.584 23:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:34:49.584 23:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:34:49.584 23:16:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:34:49.584 23:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:34:49.584 23:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:34:49.584 23:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:34:49.584 23:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:34:49.584 23:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:34:49.584 23:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:34:49.584 23:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:34:49.584 23:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:34:49.584 23:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:34:49.584 23:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:34:49.584 23:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:34:49.584 23:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:34:49.584 23:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:34:49.584 23:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.CBvNKWNNxR 00:34:49.584 23:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69107 00:34:49.584 23:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69107 00:34:49.584 23:16:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69107 ']' 00:34:49.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:49.584 23:16:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:49.584 23:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:34:49.584 23:16:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:49.584 23:16:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:49.584 23:16:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:49.584 23:16:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:49.584 [2024-12-09 23:16:29.992943] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:34:49.584 [2024-12-09 23:16:29.993060] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69107 ] 00:34:49.584 [2024-12-09 23:16:30.175832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:49.842 [2024-12-09 23:16:30.295829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:50.100 [2024-12-09 23:16:30.521718] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:50.100 [2024-12-09 23:16:30.521965] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:50.360 23:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:50.360 23:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:34:50.360 23:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # 
for bdev in "${base_bdevs[@]}" 00:34:50.360 23:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:34:50.360 23:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.360 23:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:50.360 BaseBdev1_malloc 00:34:50.360 23:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.360 23:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:34:50.360 23:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.360 23:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:50.360 true 00:34:50.360 23:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.360 23:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:34:50.360 23:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.360 23:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:50.360 [2024-12-09 23:16:30.888912] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:34:50.360 [2024-12-09 23:16:30.888970] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:50.360 [2024-12-09 23:16:30.888994] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:34:50.360 [2024-12-09 23:16:30.889008] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:50.360 [2024-12-09 23:16:30.891486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:50.360 [2024-12-09 23:16:30.891528] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:50.360 BaseBdev1 00:34:50.360 23:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.360 23:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:34:50.360 23:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:34:50.360 23:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.360 23:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:50.360 BaseBdev2_malloc 00:34:50.360 23:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.360 23:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:34:50.360 23:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.360 23:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:50.360 true 00:34:50.360 23:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.360 23:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:34:50.360 23:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.360 23:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:50.360 [2024-12-09 23:16:30.954936] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:34:50.360 [2024-12-09 23:16:30.955138] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:50.360 [2024-12-09 23:16:30.955169] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:34:50.360 
[2024-12-09 23:16:30.955186] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:50.360 [2024-12-09 23:16:30.957831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:50.360 [2024-12-09 23:16:30.957877] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:34:50.360 BaseBdev2 00:34:50.360 23:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.360 23:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:34:50.360 23:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:34:50.360 23:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.360 23:16:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:50.658 BaseBdev3_malloc 00:34:50.658 23:16:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.658 23:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:34:50.658 23:16:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.658 23:16:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:50.658 true 00:34:50.658 23:16:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.658 23:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:34:50.658 23:16:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.658 23:16:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:50.658 [2024-12-09 23:16:31.032113] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev3_malloc 00:34:50.658 [2024-12-09 23:16:31.032170] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:50.658 [2024-12-09 23:16:31.032192] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:34:50.658 [2024-12-09 23:16:31.032206] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:50.658 [2024-12-09 23:16:31.034664] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:50.658 [2024-12-09 23:16:31.034708] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:34:50.658 BaseBdev3 00:34:50.658 23:16:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.658 23:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:34:50.658 23:16:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.658 23:16:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:50.658 [2024-12-09 23:16:31.044183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:50.658 [2024-12-09 23:16:31.046258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:50.658 [2024-12-09 23:16:31.046519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:50.658 [2024-12-09 23:16:31.046777] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:34:50.658 [2024-12-09 23:16:31.046793] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:34:50.658 [2024-12-09 23:16:31.047124] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:34:50.658 [2024-12-09 23:16:31.047317] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev 
generic 0x617000008200 00:34:50.658 [2024-12-09 23:16:31.047332] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:34:50.658 [2024-12-09 23:16:31.047540] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:50.658 23:16:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.658 23:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:34:50.658 23:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:50.658 23:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:50.658 23:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:50.658 23:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:50.658 23:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:50.658 23:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:50.658 23:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:50.659 23:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:50.659 23:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:50.659 23:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:50.659 23:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:50.659 23:16:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.659 23:16:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:50.659 23:16:31 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.659 23:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:50.659 "name": "raid_bdev1", 00:34:50.659 "uuid": "e0cf7273-f989-4c4f-a9c3-44869ad7c9c2", 00:34:50.659 "strip_size_kb": 0, 00:34:50.659 "state": "online", 00:34:50.659 "raid_level": "raid1", 00:34:50.659 "superblock": true, 00:34:50.659 "num_base_bdevs": 3, 00:34:50.659 "num_base_bdevs_discovered": 3, 00:34:50.659 "num_base_bdevs_operational": 3, 00:34:50.659 "base_bdevs_list": [ 00:34:50.659 { 00:34:50.659 "name": "BaseBdev1", 00:34:50.659 "uuid": "50955601-acc6-5d8e-bbf7-3b477a1709ae", 00:34:50.659 "is_configured": true, 00:34:50.659 "data_offset": 2048, 00:34:50.659 "data_size": 63488 00:34:50.659 }, 00:34:50.659 { 00:34:50.659 "name": "BaseBdev2", 00:34:50.659 "uuid": "69030d0e-26ad-514a-9508-520c9163f163", 00:34:50.659 "is_configured": true, 00:34:50.659 "data_offset": 2048, 00:34:50.659 "data_size": 63488 00:34:50.659 }, 00:34:50.659 { 00:34:50.659 "name": "BaseBdev3", 00:34:50.659 "uuid": "ed704e69-fba4-59b3-ba1c-ad1004d2f2e4", 00:34:50.659 "is_configured": true, 00:34:50.659 "data_offset": 2048, 00:34:50.659 "data_size": 63488 00:34:50.659 } 00:34:50.659 ] 00:34:50.659 }' 00:34:50.659 23:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:50.659 23:16:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:50.939 23:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:34:50.939 23:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:34:50.939 [2024-12-09 23:16:31.568879] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:34:51.876 23:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc 
write failure 00:34:51.876 23:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.876 23:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:51.877 [2024-12-09 23:16:32.484347] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:34:51.877 [2024-12-09 23:16:32.484611] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:51.877 [2024-12-09 23:16:32.484865] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 00:34:51.877 23:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.877 23:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:34:51.877 23:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:34:51.877 23:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:34:51.877 23:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:34:51.877 23:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:51.877 23:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:51.877 23:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:51.877 23:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:51.877 23:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:51.877 23:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:51.877 23:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:51.877 23:16:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:51.877 23:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:51.877 23:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:51.877 23:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:51.877 23:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:51.877 23:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.877 23:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:52.136 23:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.136 23:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:52.136 "name": "raid_bdev1", 00:34:52.136 "uuid": "e0cf7273-f989-4c4f-a9c3-44869ad7c9c2", 00:34:52.136 "strip_size_kb": 0, 00:34:52.136 "state": "online", 00:34:52.136 "raid_level": "raid1", 00:34:52.136 "superblock": true, 00:34:52.136 "num_base_bdevs": 3, 00:34:52.136 "num_base_bdevs_discovered": 2, 00:34:52.136 "num_base_bdevs_operational": 2, 00:34:52.136 "base_bdevs_list": [ 00:34:52.136 { 00:34:52.136 "name": null, 00:34:52.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:52.136 "is_configured": false, 00:34:52.136 "data_offset": 0, 00:34:52.136 "data_size": 63488 00:34:52.136 }, 00:34:52.136 { 00:34:52.136 "name": "BaseBdev2", 00:34:52.136 "uuid": "69030d0e-26ad-514a-9508-520c9163f163", 00:34:52.136 "is_configured": true, 00:34:52.136 "data_offset": 2048, 00:34:52.136 "data_size": 63488 00:34:52.136 }, 00:34:52.136 { 00:34:52.136 "name": "BaseBdev3", 00:34:52.136 "uuid": "ed704e69-fba4-59b3-ba1c-ad1004d2f2e4", 00:34:52.136 "is_configured": true, 00:34:52.136 "data_offset": 2048, 00:34:52.136 "data_size": 63488 
00:34:52.136 } 00:34:52.136 ] 00:34:52.136 }' 00:34:52.136 23:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:52.136 23:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:52.396 23:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:52.396 23:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.396 23:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:52.396 [2024-12-09 23:16:32.959216] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:52.396 [2024-12-09 23:16:32.959254] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:52.396 [2024-12-09 23:16:32.961841] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:52.396 [2024-12-09 23:16:32.962057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:52.396 [2024-12-09 23:16:32.962152] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:52.396 [2024-12-09 23:16:32.962172] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:34:52.396 { 00:34:52.396 "results": [ 00:34:52.396 { 00:34:52.396 "job": "raid_bdev1", 00:34:52.396 "core_mask": "0x1", 00:34:52.396 "workload": "randrw", 00:34:52.396 "percentage": 50, 00:34:52.396 "status": "finished", 00:34:52.396 "queue_depth": 1, 00:34:52.396 "io_size": 131072, 00:34:52.396 "runtime": 1.390566, 00:34:52.396 "iops": 14783.908135248525, 00:34:52.396 "mibps": 1847.9885169060656, 00:34:52.396 "io_failed": 0, 00:34:52.396 "io_timeout": 0, 00:34:52.396 "avg_latency_us": 64.79978776864438, 00:34:52.396 "min_latency_us": 24.674698795180724, 00:34:52.396 "max_latency_us": 1441.0024096385541 00:34:52.396 } 00:34:52.396 ], 
00:34:52.396 "core_count": 1 00:34:52.396 } 00:34:52.396 23:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.396 23:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69107 00:34:52.396 23:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69107 ']' 00:34:52.396 23:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69107 00:34:52.396 23:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:34:52.396 23:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:52.396 23:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69107 00:34:52.396 killing process with pid 69107 00:34:52.396 23:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:52.396 23:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:52.396 23:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69107' 00:34:52.396 23:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69107 00:34:52.396 [2024-12-09 23:16:33.010800] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:52.396 23:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69107 00:34:52.655 [2024-12-09 23:16:33.244380] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:54.055 23:16:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:34:54.055 23:16:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.CBvNKWNNxR 00:34:54.055 23:16:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:34:54.055 23:16:34 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:34:54.055 23:16:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:34:54.055 23:16:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:34:54.055 23:16:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:34:54.055 23:16:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:34:54.055 00:34:54.055 real 0m4.587s 00:34:54.055 user 0m5.415s 00:34:54.055 sys 0m0.607s 00:34:54.055 23:16:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:54.055 ************************************ 00:34:54.055 END TEST raid_write_error_test 00:34:54.055 ************************************ 00:34:54.056 23:16:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:54.056 23:16:34 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:34:54.056 23:16:34 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:34:54.056 23:16:34 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:34:54.056 23:16:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:34:54.056 23:16:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:54.056 23:16:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:54.056 ************************************ 00:34:54.056 START TEST raid_state_function_test 00:34:54.056 ************************************ 00:34:54.056 23:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:34:54.056 23:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:34:54.056 23:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:34:54.056 23:16:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:34:54.056 23:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:34:54.056 23:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:34:54.056 23:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:54.056 23:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:34:54.056 23:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:34:54.056 23:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:54.056 23:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:34:54.056 23:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:34:54.056 23:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:54.056 23:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:34:54.056 23:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:34:54.056 23:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:54.056 23:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:34:54.056 23:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:34:54.056 23:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:54.056 23:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:34:54.056 23:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:34:54.056 23:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local 
raid_bdev_name=Existed_Raid 00:34:54.056 23:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:34:54.056 23:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:34:54.056 23:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:34:54.056 23:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:34:54.056 23:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:34:54.056 23:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:34:54.056 23:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:34:54.056 23:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:34:54.056 Process raid pid: 69251 00:34:54.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:54.056 23:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69251 00:34:54.056 23:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69251' 00:34:54.056 23:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69251 00:34:54.056 23:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:34:54.056 23:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69251 ']' 00:34:54.056 23:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:54.056 23:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:54.056 23:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:54.056 23:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:54.056 23:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:54.056 [2024-12-09 23:16:34.647848] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:34:54.056 [2024-12-09 23:16:34.648152] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:54.366 [2024-12-09 23:16:34.832478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:54.366 [2024-12-09 23:16:34.951794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:54.632 [2024-12-09 23:16:35.166854] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:54.632 [2024-12-09 23:16:35.167017] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:54.893 23:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:54.893 23:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:34:54.893 23:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:34:54.893 23:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.893 23:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:54.893 [2024-12-09 23:16:35.502994] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:54.893 [2024-12-09 23:16:35.503058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:54.893 [2024-12-09 23:16:35.503070] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:54.894 [2024-12-09 23:16:35.503083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:54.894 [2024-12-09 23:16:35.503091] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:34:54.894 [2024-12-09 23:16:35.503103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:54.894 [2024-12-09 23:16:35.503111] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:34:54.894 [2024-12-09 23:16:35.503123] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:34:54.894 23:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.894 23:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:34:54.894 23:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:54.894 23:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:54.894 23:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:34:54.894 23:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:54.894 23:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:34:54.894 23:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:54.894 23:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:54.894 23:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:54.894 23:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:54.894 23:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:54.894 23:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.894 23:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:34:54.894 23:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:55.152 23:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.152 23:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:55.152 "name": "Existed_Raid", 00:34:55.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:55.152 "strip_size_kb": 64, 00:34:55.152 "state": "configuring", 00:34:55.152 "raid_level": "raid0", 00:34:55.152 "superblock": false, 00:34:55.152 "num_base_bdevs": 4, 00:34:55.152 "num_base_bdevs_discovered": 0, 00:34:55.152 "num_base_bdevs_operational": 4, 00:34:55.152 "base_bdevs_list": [ 00:34:55.152 { 00:34:55.152 "name": "BaseBdev1", 00:34:55.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:55.152 "is_configured": false, 00:34:55.152 "data_offset": 0, 00:34:55.152 "data_size": 0 00:34:55.152 }, 00:34:55.152 { 00:34:55.152 "name": "BaseBdev2", 00:34:55.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:55.152 "is_configured": false, 00:34:55.152 "data_offset": 0, 00:34:55.152 "data_size": 0 00:34:55.152 }, 00:34:55.152 { 00:34:55.152 "name": "BaseBdev3", 00:34:55.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:55.152 "is_configured": false, 00:34:55.152 "data_offset": 0, 00:34:55.152 "data_size": 0 00:34:55.152 }, 00:34:55.152 { 00:34:55.152 "name": "BaseBdev4", 00:34:55.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:55.152 "is_configured": false, 00:34:55.152 "data_offset": 0, 00:34:55.152 "data_size": 0 00:34:55.152 } 00:34:55.152 ] 00:34:55.152 }' 00:34:55.152 23:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:55.152 23:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:55.411 23:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:34:55.411 23:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.411 23:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:55.411 [2024-12-09 23:16:35.942453] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:55.411 [2024-12-09 23:16:35.942496] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:34:55.411 23:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.411 23:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:34:55.411 23:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.411 23:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:55.411 [2024-12-09 23:16:35.954429] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:55.411 [2024-12-09 23:16:35.954474] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:55.411 [2024-12-09 23:16:35.954484] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:55.411 [2024-12-09 23:16:35.954496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:55.411 [2024-12-09 23:16:35.954504] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:55.411 [2024-12-09 23:16:35.954516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:55.411 [2024-12-09 23:16:35.954523] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:34:55.411 [2024-12-09 23:16:35.954535] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:34:55.411 23:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.411 23:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:34:55.411 23:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.411 23:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:55.411 [2024-12-09 23:16:36.004727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:55.411 BaseBdev1 00:34:55.411 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.411 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:34:55.411 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:34:55.411 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:55.411 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:34:55.411 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:55.411 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:55.411 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:55.411 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.411 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:55.411 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.411 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:55.412 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.412 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:55.412 [ 00:34:55.412 { 00:34:55.412 "name": "BaseBdev1", 00:34:55.412 "aliases": [ 00:34:55.412 "69161484-ae5f-4be5-af4f-0335e4ca9f4b" 00:34:55.412 ], 00:34:55.412 "product_name": "Malloc disk", 00:34:55.412 "block_size": 512, 00:34:55.412 "num_blocks": 65536, 00:34:55.412 "uuid": "69161484-ae5f-4be5-af4f-0335e4ca9f4b", 00:34:55.412 "assigned_rate_limits": { 00:34:55.412 "rw_ios_per_sec": 0, 00:34:55.412 "rw_mbytes_per_sec": 0, 00:34:55.412 "r_mbytes_per_sec": 0, 00:34:55.412 "w_mbytes_per_sec": 0 00:34:55.412 }, 00:34:55.412 "claimed": true, 00:34:55.412 "claim_type": "exclusive_write", 00:34:55.412 "zoned": false, 00:34:55.412 "supported_io_types": { 00:34:55.412 "read": true, 00:34:55.412 "write": true, 00:34:55.412 "unmap": true, 00:34:55.412 "flush": true, 00:34:55.412 "reset": true, 00:34:55.412 "nvme_admin": false, 00:34:55.412 "nvme_io": false, 00:34:55.412 "nvme_io_md": false, 00:34:55.412 "write_zeroes": true, 00:34:55.412 "zcopy": true, 00:34:55.412 "get_zone_info": false, 00:34:55.412 "zone_management": false, 00:34:55.412 "zone_append": false, 00:34:55.412 "compare": false, 00:34:55.412 "compare_and_write": false, 00:34:55.412 "abort": true, 00:34:55.412 "seek_hole": false, 00:34:55.412 "seek_data": false, 00:34:55.412 "copy": true, 00:34:55.412 "nvme_iov_md": false 00:34:55.412 }, 00:34:55.412 "memory_domains": [ 00:34:55.412 { 00:34:55.412 "dma_device_id": "system", 00:34:55.412 "dma_device_type": 1 00:34:55.412 }, 00:34:55.412 { 00:34:55.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:55.412 "dma_device_type": 2 00:34:55.412 } 00:34:55.412 ], 00:34:55.412 "driver_specific": {} 00:34:55.412 } 00:34:55.412 ] 00:34:55.412 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:34:55.412 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:34:55.412 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:34:55.412 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:55.412 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:55.412 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:34:55.412 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:55.412 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:34:55.412 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:55.412 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:55.412 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:55.412 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:55.412 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:55.412 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.412 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:55.412 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:55.671 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.671 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:55.671 "name": "Existed_Raid", 
00:34:55.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:55.671 "strip_size_kb": 64, 00:34:55.671 "state": "configuring", 00:34:55.671 "raid_level": "raid0", 00:34:55.671 "superblock": false, 00:34:55.671 "num_base_bdevs": 4, 00:34:55.671 "num_base_bdevs_discovered": 1, 00:34:55.671 "num_base_bdevs_operational": 4, 00:34:55.671 "base_bdevs_list": [ 00:34:55.671 { 00:34:55.671 "name": "BaseBdev1", 00:34:55.671 "uuid": "69161484-ae5f-4be5-af4f-0335e4ca9f4b", 00:34:55.671 "is_configured": true, 00:34:55.671 "data_offset": 0, 00:34:55.671 "data_size": 65536 00:34:55.671 }, 00:34:55.671 { 00:34:55.671 "name": "BaseBdev2", 00:34:55.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:55.671 "is_configured": false, 00:34:55.671 "data_offset": 0, 00:34:55.671 "data_size": 0 00:34:55.671 }, 00:34:55.671 { 00:34:55.671 "name": "BaseBdev3", 00:34:55.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:55.671 "is_configured": false, 00:34:55.671 "data_offset": 0, 00:34:55.671 "data_size": 0 00:34:55.671 }, 00:34:55.671 { 00:34:55.671 "name": "BaseBdev4", 00:34:55.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:55.671 "is_configured": false, 00:34:55.671 "data_offset": 0, 00:34:55.671 "data_size": 0 00:34:55.671 } 00:34:55.671 ] 00:34:55.671 }' 00:34:55.671 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:55.671 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:55.931 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:34:55.931 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.931 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:55.931 [2024-12-09 23:16:36.440242] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:55.931 [2024-12-09 23:16:36.440449] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:34:55.931 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.931 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:34:55.931 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.931 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:55.931 [2024-12-09 23:16:36.452284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:55.931 [2024-12-09 23:16:36.454516] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:55.931 [2024-12-09 23:16:36.454564] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:55.931 [2024-12-09 23:16:36.454576] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:55.931 [2024-12-09 23:16:36.454592] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:55.931 [2024-12-09 23:16:36.454601] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:34:55.931 [2024-12-09 23:16:36.454613] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:34:55.931 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.931 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:34:55.931 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:34:55.931 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:34:55.931 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:55.931 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:55.931 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:34:55.931 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:55.931 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:34:55.931 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:55.931 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:55.931 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:55.931 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:55.931 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:55.931 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:55.931 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.931 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:55.931 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.931 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:55.931 "name": "Existed_Raid", 00:34:55.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:55.931 "strip_size_kb": 64, 00:34:55.931 "state": "configuring", 00:34:55.931 "raid_level": "raid0", 00:34:55.931 "superblock": false, 00:34:55.931 "num_base_bdevs": 4, 00:34:55.931 
"num_base_bdevs_discovered": 1, 00:34:55.931 "num_base_bdevs_operational": 4, 00:34:55.931 "base_bdevs_list": [ 00:34:55.931 { 00:34:55.931 "name": "BaseBdev1", 00:34:55.931 "uuid": "69161484-ae5f-4be5-af4f-0335e4ca9f4b", 00:34:55.931 "is_configured": true, 00:34:55.931 "data_offset": 0, 00:34:55.931 "data_size": 65536 00:34:55.931 }, 00:34:55.931 { 00:34:55.931 "name": "BaseBdev2", 00:34:55.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:55.931 "is_configured": false, 00:34:55.931 "data_offset": 0, 00:34:55.931 "data_size": 0 00:34:55.931 }, 00:34:55.931 { 00:34:55.931 "name": "BaseBdev3", 00:34:55.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:55.931 "is_configured": false, 00:34:55.931 "data_offset": 0, 00:34:55.931 "data_size": 0 00:34:55.931 }, 00:34:55.931 { 00:34:55.931 "name": "BaseBdev4", 00:34:55.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:55.931 "is_configured": false, 00:34:55.931 "data_offset": 0, 00:34:55.931 "data_size": 0 00:34:55.931 } 00:34:55.931 ] 00:34:55.931 }' 00:34:55.931 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:55.931 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:56.502 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:34:56.502 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.502 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:56.502 [2024-12-09 23:16:36.903851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:56.502 BaseBdev2 00:34:56.502 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.502 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:34:56.502 23:16:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:34:56.502 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:56.502 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:34:56.502 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:56.502 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:56.502 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:56.502 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.502 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:56.502 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.502 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:56.502 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.502 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:56.502 [ 00:34:56.502 { 00:34:56.502 "name": "BaseBdev2", 00:34:56.502 "aliases": [ 00:34:56.502 "9a662b1c-8317-4597-ba79-22bb77a90a58" 00:34:56.502 ], 00:34:56.502 "product_name": "Malloc disk", 00:34:56.502 "block_size": 512, 00:34:56.502 "num_blocks": 65536, 00:34:56.502 "uuid": "9a662b1c-8317-4597-ba79-22bb77a90a58", 00:34:56.502 "assigned_rate_limits": { 00:34:56.502 "rw_ios_per_sec": 0, 00:34:56.502 "rw_mbytes_per_sec": 0, 00:34:56.502 "r_mbytes_per_sec": 0, 00:34:56.502 "w_mbytes_per_sec": 0 00:34:56.502 }, 00:34:56.502 "claimed": true, 00:34:56.502 "claim_type": "exclusive_write", 00:34:56.502 "zoned": false, 00:34:56.502 "supported_io_types": { 
00:34:56.502 "read": true, 00:34:56.502 "write": true, 00:34:56.502 "unmap": true, 00:34:56.502 "flush": true, 00:34:56.502 "reset": true, 00:34:56.502 "nvme_admin": false, 00:34:56.502 "nvme_io": false, 00:34:56.502 "nvme_io_md": false, 00:34:56.502 "write_zeroes": true, 00:34:56.502 "zcopy": true, 00:34:56.502 "get_zone_info": false, 00:34:56.502 "zone_management": false, 00:34:56.502 "zone_append": false, 00:34:56.502 "compare": false, 00:34:56.502 "compare_and_write": false, 00:34:56.502 "abort": true, 00:34:56.502 "seek_hole": false, 00:34:56.502 "seek_data": false, 00:34:56.502 "copy": true, 00:34:56.502 "nvme_iov_md": false 00:34:56.502 }, 00:34:56.502 "memory_domains": [ 00:34:56.502 { 00:34:56.502 "dma_device_id": "system", 00:34:56.502 "dma_device_type": 1 00:34:56.502 }, 00:34:56.502 { 00:34:56.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:56.502 "dma_device_type": 2 00:34:56.502 } 00:34:56.502 ], 00:34:56.502 "driver_specific": {} 00:34:56.502 } 00:34:56.502 ] 00:34:56.502 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.502 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:34:56.502 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:34:56.502 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:34:56.502 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:34:56.502 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:56.502 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:56.502 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:34:56.502 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:34:56.502 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:34:56.502 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:56.502 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:56.502 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:56.503 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:56.503 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:56.503 23:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:56.503 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.503 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:56.503 23:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.503 23:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:56.503 "name": "Existed_Raid", 00:34:56.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:56.503 "strip_size_kb": 64, 00:34:56.503 "state": "configuring", 00:34:56.503 "raid_level": "raid0", 00:34:56.503 "superblock": false, 00:34:56.503 "num_base_bdevs": 4, 00:34:56.503 "num_base_bdevs_discovered": 2, 00:34:56.503 "num_base_bdevs_operational": 4, 00:34:56.503 "base_bdevs_list": [ 00:34:56.503 { 00:34:56.503 "name": "BaseBdev1", 00:34:56.503 "uuid": "69161484-ae5f-4be5-af4f-0335e4ca9f4b", 00:34:56.503 "is_configured": true, 00:34:56.503 "data_offset": 0, 00:34:56.503 "data_size": 65536 00:34:56.503 }, 00:34:56.503 { 00:34:56.503 "name": "BaseBdev2", 00:34:56.503 "uuid": "9a662b1c-8317-4597-ba79-22bb77a90a58", 00:34:56.503 
"is_configured": true, 00:34:56.503 "data_offset": 0, 00:34:56.503 "data_size": 65536 00:34:56.503 }, 00:34:56.503 { 00:34:56.503 "name": "BaseBdev3", 00:34:56.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:56.503 "is_configured": false, 00:34:56.503 "data_offset": 0, 00:34:56.503 "data_size": 0 00:34:56.503 }, 00:34:56.503 { 00:34:56.503 "name": "BaseBdev4", 00:34:56.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:56.503 "is_configured": false, 00:34:56.503 "data_offset": 0, 00:34:56.503 "data_size": 0 00:34:56.503 } 00:34:56.503 ] 00:34:56.503 }' 00:34:56.503 23:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:56.503 23:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:56.761 23:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:34:56.761 23:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.761 23:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:57.020 [2024-12-09 23:16:37.413326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:57.020 BaseBdev3 00:34:57.020 23:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.020 23:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:34:57.020 23:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:34:57.020 23:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:57.020 23:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:34:57.020 23:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:57.020 23:16:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:57.020 23:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:57.020 23:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.020 23:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:57.020 23:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.020 23:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:34:57.020 23:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.020 23:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:57.020 [ 00:34:57.020 { 00:34:57.020 "name": "BaseBdev3", 00:34:57.020 "aliases": [ 00:34:57.020 "97ac573a-3388-450d-8f2d-33a2c874364b" 00:34:57.020 ], 00:34:57.020 "product_name": "Malloc disk", 00:34:57.020 "block_size": 512, 00:34:57.020 "num_blocks": 65536, 00:34:57.020 "uuid": "97ac573a-3388-450d-8f2d-33a2c874364b", 00:34:57.020 "assigned_rate_limits": { 00:34:57.020 "rw_ios_per_sec": 0, 00:34:57.020 "rw_mbytes_per_sec": 0, 00:34:57.020 "r_mbytes_per_sec": 0, 00:34:57.020 "w_mbytes_per_sec": 0 00:34:57.020 }, 00:34:57.020 "claimed": true, 00:34:57.020 "claim_type": "exclusive_write", 00:34:57.020 "zoned": false, 00:34:57.020 "supported_io_types": { 00:34:57.020 "read": true, 00:34:57.020 "write": true, 00:34:57.020 "unmap": true, 00:34:57.020 "flush": true, 00:34:57.020 "reset": true, 00:34:57.020 "nvme_admin": false, 00:34:57.020 "nvme_io": false, 00:34:57.020 "nvme_io_md": false, 00:34:57.020 "write_zeroes": true, 00:34:57.020 "zcopy": true, 00:34:57.020 "get_zone_info": false, 00:34:57.020 "zone_management": false, 00:34:57.020 "zone_append": false, 00:34:57.020 "compare": false, 00:34:57.020 "compare_and_write": false, 
00:34:57.020 "abort": true, 00:34:57.020 "seek_hole": false, 00:34:57.020 "seek_data": false, 00:34:57.020 "copy": true, 00:34:57.020 "nvme_iov_md": false 00:34:57.020 }, 00:34:57.020 "memory_domains": [ 00:34:57.020 { 00:34:57.020 "dma_device_id": "system", 00:34:57.020 "dma_device_type": 1 00:34:57.020 }, 00:34:57.020 { 00:34:57.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:57.020 "dma_device_type": 2 00:34:57.020 } 00:34:57.020 ], 00:34:57.020 "driver_specific": {} 00:34:57.020 } 00:34:57.020 ] 00:34:57.020 23:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.020 23:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:34:57.020 23:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:34:57.020 23:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:34:57.020 23:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:34:57.020 23:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:57.020 23:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:57.020 23:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:34:57.020 23:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:57.020 23:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:34:57.020 23:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:57.020 23:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:57.020 23:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:34:57.020 23:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:57.020 23:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:57.020 23:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.021 23:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:57.021 23:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:57.021 23:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.021 23:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:57.021 "name": "Existed_Raid", 00:34:57.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:57.021 "strip_size_kb": 64, 00:34:57.021 "state": "configuring", 00:34:57.021 "raid_level": "raid0", 00:34:57.021 "superblock": false, 00:34:57.021 "num_base_bdevs": 4, 00:34:57.021 "num_base_bdevs_discovered": 3, 00:34:57.021 "num_base_bdevs_operational": 4, 00:34:57.021 "base_bdevs_list": [ 00:34:57.021 { 00:34:57.021 "name": "BaseBdev1", 00:34:57.021 "uuid": "69161484-ae5f-4be5-af4f-0335e4ca9f4b", 00:34:57.021 "is_configured": true, 00:34:57.021 "data_offset": 0, 00:34:57.021 "data_size": 65536 00:34:57.021 }, 00:34:57.021 { 00:34:57.021 "name": "BaseBdev2", 00:34:57.021 "uuid": "9a662b1c-8317-4597-ba79-22bb77a90a58", 00:34:57.021 "is_configured": true, 00:34:57.021 "data_offset": 0, 00:34:57.021 "data_size": 65536 00:34:57.021 }, 00:34:57.021 { 00:34:57.021 "name": "BaseBdev3", 00:34:57.021 "uuid": "97ac573a-3388-450d-8f2d-33a2c874364b", 00:34:57.021 "is_configured": true, 00:34:57.021 "data_offset": 0, 00:34:57.021 "data_size": 65536 00:34:57.021 }, 00:34:57.021 { 00:34:57.021 "name": "BaseBdev4", 00:34:57.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:57.021 "is_configured": false, 
00:34:57.021 "data_offset": 0, 00:34:57.021 "data_size": 0 00:34:57.021 } 00:34:57.021 ] 00:34:57.021 }' 00:34:57.021 23:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:57.021 23:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:57.279 23:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:34:57.279 23:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.279 23:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:57.538 [2024-12-09 23:16:37.929759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:34:57.538 [2024-12-09 23:16:37.929816] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:34:57.538 [2024-12-09 23:16:37.929827] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:34:57.538 [2024-12-09 23:16:37.930120] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:34:57.538 [2024-12-09 23:16:37.930299] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:34:57.538 [2024-12-09 23:16:37.930313] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:34:57.538 [2024-12-09 23:16:37.930599] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:57.538 BaseBdev4 00:34:57.538 23:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.538 23:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:34:57.538 23:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:34:57.538 23:16:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:57.538 23:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:34:57.538 23:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:57.538 23:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:57.538 23:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:57.538 23:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.538 23:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:57.538 23:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.538 23:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:34:57.538 23:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.538 23:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:57.538 [ 00:34:57.538 { 00:34:57.538 "name": "BaseBdev4", 00:34:57.538 "aliases": [ 00:34:57.538 "4b614c57-e782-4be2-b2ea-b3fdb57907ff" 00:34:57.538 ], 00:34:57.538 "product_name": "Malloc disk", 00:34:57.538 "block_size": 512, 00:34:57.538 "num_blocks": 65536, 00:34:57.538 "uuid": "4b614c57-e782-4be2-b2ea-b3fdb57907ff", 00:34:57.538 "assigned_rate_limits": { 00:34:57.538 "rw_ios_per_sec": 0, 00:34:57.538 "rw_mbytes_per_sec": 0, 00:34:57.538 "r_mbytes_per_sec": 0, 00:34:57.538 "w_mbytes_per_sec": 0 00:34:57.538 }, 00:34:57.538 "claimed": true, 00:34:57.538 "claim_type": "exclusive_write", 00:34:57.538 "zoned": false, 00:34:57.538 "supported_io_types": { 00:34:57.538 "read": true, 00:34:57.538 "write": true, 00:34:57.538 "unmap": true, 00:34:57.538 "flush": true, 00:34:57.538 "reset": true, 00:34:57.538 
"nvme_admin": false, 00:34:57.538 "nvme_io": false, 00:34:57.538 "nvme_io_md": false, 00:34:57.538 "write_zeroes": true, 00:34:57.538 "zcopy": true, 00:34:57.538 "get_zone_info": false, 00:34:57.538 "zone_management": false, 00:34:57.538 "zone_append": false, 00:34:57.538 "compare": false, 00:34:57.538 "compare_and_write": false, 00:34:57.538 "abort": true, 00:34:57.538 "seek_hole": false, 00:34:57.538 "seek_data": false, 00:34:57.538 "copy": true, 00:34:57.538 "nvme_iov_md": false 00:34:57.538 }, 00:34:57.538 "memory_domains": [ 00:34:57.538 { 00:34:57.538 "dma_device_id": "system", 00:34:57.538 "dma_device_type": 1 00:34:57.538 }, 00:34:57.538 { 00:34:57.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:57.538 "dma_device_type": 2 00:34:57.538 } 00:34:57.538 ], 00:34:57.538 "driver_specific": {} 00:34:57.538 } 00:34:57.538 ] 00:34:57.538 23:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.538 23:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:34:57.538 23:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:34:57.538 23:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:34:57.538 23:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:34:57.538 23:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:57.538 23:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:57.538 23:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:34:57.538 23:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:57.538 23:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:34:57.538 23:16:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:57.538 23:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:57.538 23:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:57.538 23:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:57.538 23:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:57.538 23:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.538 23:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:57.538 23:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:57.538 23:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.538 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:57.538 "name": "Existed_Raid", 00:34:57.538 "uuid": "b17d7e66-458b-4dfb-82e5-70a198f7f954", 00:34:57.538 "strip_size_kb": 64, 00:34:57.538 "state": "online", 00:34:57.538 "raid_level": "raid0", 00:34:57.538 "superblock": false, 00:34:57.538 "num_base_bdevs": 4, 00:34:57.538 "num_base_bdevs_discovered": 4, 00:34:57.538 "num_base_bdevs_operational": 4, 00:34:57.538 "base_bdevs_list": [ 00:34:57.538 { 00:34:57.538 "name": "BaseBdev1", 00:34:57.538 "uuid": "69161484-ae5f-4be5-af4f-0335e4ca9f4b", 00:34:57.538 "is_configured": true, 00:34:57.538 "data_offset": 0, 00:34:57.538 "data_size": 65536 00:34:57.538 }, 00:34:57.538 { 00:34:57.538 "name": "BaseBdev2", 00:34:57.538 "uuid": "9a662b1c-8317-4597-ba79-22bb77a90a58", 00:34:57.538 "is_configured": true, 00:34:57.538 "data_offset": 0, 00:34:57.538 "data_size": 65536 00:34:57.538 }, 00:34:57.538 { 00:34:57.538 "name": "BaseBdev3", 00:34:57.538 "uuid": 
"97ac573a-3388-450d-8f2d-33a2c874364b", 00:34:57.538 "is_configured": true, 00:34:57.538 "data_offset": 0, 00:34:57.539 "data_size": 65536 00:34:57.539 }, 00:34:57.539 { 00:34:57.539 "name": "BaseBdev4", 00:34:57.539 "uuid": "4b614c57-e782-4be2-b2ea-b3fdb57907ff", 00:34:57.539 "is_configured": true, 00:34:57.539 "data_offset": 0, 00:34:57.539 "data_size": 65536 00:34:57.539 } 00:34:57.539 ] 00:34:57.539 }' 00:34:57.539 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:57.539 23:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:57.796 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:34:57.796 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:34:57.796 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:34:57.796 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:34:57.796 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:34:57.796 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:34:57.796 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:34:57.796 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:34:57.796 23:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.796 23:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:57.796 [2024-12-09 23:16:38.421514] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:58.055 23:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.055 23:16:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:58.055 "name": "Existed_Raid", 00:34:58.055 "aliases": [ 00:34:58.055 "b17d7e66-458b-4dfb-82e5-70a198f7f954" 00:34:58.055 ], 00:34:58.055 "product_name": "Raid Volume", 00:34:58.055 "block_size": 512, 00:34:58.055 "num_blocks": 262144, 00:34:58.055 "uuid": "b17d7e66-458b-4dfb-82e5-70a198f7f954", 00:34:58.055 "assigned_rate_limits": { 00:34:58.055 "rw_ios_per_sec": 0, 00:34:58.055 "rw_mbytes_per_sec": 0, 00:34:58.055 "r_mbytes_per_sec": 0, 00:34:58.055 "w_mbytes_per_sec": 0 00:34:58.055 }, 00:34:58.055 "claimed": false, 00:34:58.055 "zoned": false, 00:34:58.055 "supported_io_types": { 00:34:58.055 "read": true, 00:34:58.055 "write": true, 00:34:58.055 "unmap": true, 00:34:58.055 "flush": true, 00:34:58.055 "reset": true, 00:34:58.055 "nvme_admin": false, 00:34:58.055 "nvme_io": false, 00:34:58.055 "nvme_io_md": false, 00:34:58.055 "write_zeroes": true, 00:34:58.055 "zcopy": false, 00:34:58.055 "get_zone_info": false, 00:34:58.055 "zone_management": false, 00:34:58.055 "zone_append": false, 00:34:58.055 "compare": false, 00:34:58.055 "compare_and_write": false, 00:34:58.055 "abort": false, 00:34:58.055 "seek_hole": false, 00:34:58.055 "seek_data": false, 00:34:58.055 "copy": false, 00:34:58.055 "nvme_iov_md": false 00:34:58.055 }, 00:34:58.055 "memory_domains": [ 00:34:58.055 { 00:34:58.055 "dma_device_id": "system", 00:34:58.055 "dma_device_type": 1 00:34:58.055 }, 00:34:58.055 { 00:34:58.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:58.055 "dma_device_type": 2 00:34:58.055 }, 00:34:58.055 { 00:34:58.055 "dma_device_id": "system", 00:34:58.055 "dma_device_type": 1 00:34:58.055 }, 00:34:58.055 { 00:34:58.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:58.055 "dma_device_type": 2 00:34:58.055 }, 00:34:58.055 { 00:34:58.055 "dma_device_id": "system", 00:34:58.055 "dma_device_type": 1 00:34:58.055 }, 00:34:58.055 { 00:34:58.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:34:58.055 "dma_device_type": 2 00:34:58.055 }, 00:34:58.055 { 00:34:58.055 "dma_device_id": "system", 00:34:58.055 "dma_device_type": 1 00:34:58.055 }, 00:34:58.055 { 00:34:58.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:58.055 "dma_device_type": 2 00:34:58.055 } 00:34:58.055 ], 00:34:58.055 "driver_specific": { 00:34:58.055 "raid": { 00:34:58.055 "uuid": "b17d7e66-458b-4dfb-82e5-70a198f7f954", 00:34:58.055 "strip_size_kb": 64, 00:34:58.055 "state": "online", 00:34:58.055 "raid_level": "raid0", 00:34:58.055 "superblock": false, 00:34:58.055 "num_base_bdevs": 4, 00:34:58.055 "num_base_bdevs_discovered": 4, 00:34:58.055 "num_base_bdevs_operational": 4, 00:34:58.055 "base_bdevs_list": [ 00:34:58.055 { 00:34:58.055 "name": "BaseBdev1", 00:34:58.055 "uuid": "69161484-ae5f-4be5-af4f-0335e4ca9f4b", 00:34:58.055 "is_configured": true, 00:34:58.055 "data_offset": 0, 00:34:58.055 "data_size": 65536 00:34:58.055 }, 00:34:58.055 { 00:34:58.055 "name": "BaseBdev2", 00:34:58.055 "uuid": "9a662b1c-8317-4597-ba79-22bb77a90a58", 00:34:58.055 "is_configured": true, 00:34:58.055 "data_offset": 0, 00:34:58.055 "data_size": 65536 00:34:58.055 }, 00:34:58.055 { 00:34:58.055 "name": "BaseBdev3", 00:34:58.055 "uuid": "97ac573a-3388-450d-8f2d-33a2c874364b", 00:34:58.055 "is_configured": true, 00:34:58.055 "data_offset": 0, 00:34:58.055 "data_size": 65536 00:34:58.055 }, 00:34:58.055 { 00:34:58.055 "name": "BaseBdev4", 00:34:58.055 "uuid": "4b614c57-e782-4be2-b2ea-b3fdb57907ff", 00:34:58.055 "is_configured": true, 00:34:58.055 "data_offset": 0, 00:34:58.055 "data_size": 65536 00:34:58.055 } 00:34:58.055 ] 00:34:58.055 } 00:34:58.055 } 00:34:58.055 }' 00:34:58.055 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:58.055 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:34:58.055 BaseBdev2 00:34:58.055 BaseBdev3 
00:34:58.055 BaseBdev4' 00:34:58.055 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:58.055 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:34:58.055 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:58.055 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:34:58.055 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:58.055 23:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.055 23:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.055 23:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.055 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:58.055 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:58.055 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:58.055 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:34:58.055 23:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.055 23:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.055 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:58.055 23:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.055 23:16:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:58.055 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:58.055 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:58.055 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:34:58.055 23:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.055 23:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.055 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:58.055 23:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.315 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:58.315 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:58.315 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:58.315 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:58.315 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:34:58.315 23:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.315 23:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.315 23:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.315 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:58.315 23:16:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:58.315 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:34:58.315 23:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.315 23:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.315 [2024-12-09 23:16:38.760736] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:58.315 [2024-12-09 23:16:38.760770] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:58.315 [2024-12-09 23:16:38.760821] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:58.315 23:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.315 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:34:58.315 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:34:58.315 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:34:58.315 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:34:58.315 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:34:58.315 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:34:58.315 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:58.315 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:34:58.315 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:34:58.315 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:34:58.315 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:58.315 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:58.315 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:58.315 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:58.315 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:58.315 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:58.315 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:58.315 23:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.315 23:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.315 23:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.315 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:58.315 "name": "Existed_Raid", 00:34:58.315 "uuid": "b17d7e66-458b-4dfb-82e5-70a198f7f954", 00:34:58.315 "strip_size_kb": 64, 00:34:58.315 "state": "offline", 00:34:58.315 "raid_level": "raid0", 00:34:58.315 "superblock": false, 00:34:58.315 "num_base_bdevs": 4, 00:34:58.315 "num_base_bdevs_discovered": 3, 00:34:58.315 "num_base_bdevs_operational": 3, 00:34:58.315 "base_bdevs_list": [ 00:34:58.315 { 00:34:58.315 "name": null, 00:34:58.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:58.315 "is_configured": false, 00:34:58.315 "data_offset": 0, 00:34:58.315 "data_size": 65536 00:34:58.315 }, 00:34:58.315 { 00:34:58.315 "name": "BaseBdev2", 00:34:58.315 "uuid": "9a662b1c-8317-4597-ba79-22bb77a90a58", 00:34:58.315 "is_configured": 
true, 00:34:58.315 "data_offset": 0, 00:34:58.315 "data_size": 65536 00:34:58.315 }, 00:34:58.315 { 00:34:58.315 "name": "BaseBdev3", 00:34:58.315 "uuid": "97ac573a-3388-450d-8f2d-33a2c874364b", 00:34:58.315 "is_configured": true, 00:34:58.315 "data_offset": 0, 00:34:58.315 "data_size": 65536 00:34:58.315 }, 00:34:58.315 { 00:34:58.315 "name": "BaseBdev4", 00:34:58.315 "uuid": "4b614c57-e782-4be2-b2ea-b3fdb57907ff", 00:34:58.315 "is_configured": true, 00:34:58.315 "data_offset": 0, 00:34:58.315 "data_size": 65536 00:34:58.315 } 00:34:58.315 ] 00:34:58.315 }' 00:34:58.315 23:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:58.316 23:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.888 23:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:34:58.888 23:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:34:58.888 23:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:34:58.888 23:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:58.888 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.888 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.888 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.888 23:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:34:58.888 23:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:58.888 23:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:34:58.888 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:58.888 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.888 [2024-12-09 23:16:39.320914] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:58.888 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.888 23:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:34:58.888 23:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:34:58.888 23:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:58.888 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.888 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.888 23:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:34:58.888 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.888 23:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:34:58.888 23:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:58.888 23:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:34:58.888 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.888 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.888 [2024-12-09 23:16:39.463137] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:34:59.147 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.147 23:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:34:59.147 23:16:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:34:59.147 23:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:34:59.147 23:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:59.147 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.147 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.147 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.147 23:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:34:59.147 23:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:59.147 23:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:34:59.147 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.147 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.147 [2024-12-09 23:16:39.613648] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:34:59.147 [2024-12-09 23:16:39.613698] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:34:59.147 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.147 23:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:34:59.147 23:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:34:59.147 23:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:59.147 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:59.147 23:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:34:59.147 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.147 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.147 23:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:34:59.147 23:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:34:59.147 23:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:34:59.147 23:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:34:59.147 23:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:34:59.147 23:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:34:59.147 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.147 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.406 BaseBdev2 00:34:59.407 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.407 23:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:34:59.407 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:34:59.407 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:59.407 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:34:59.407 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:59.407 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:34:59.407 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:59.407 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.407 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.407 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.407 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:59.407 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.407 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.407 [ 00:34:59.407 { 00:34:59.407 "name": "BaseBdev2", 00:34:59.407 "aliases": [ 00:34:59.407 "af0ec7e6-917f-45ba-bca2-8245f8aadc5e" 00:34:59.407 ], 00:34:59.407 "product_name": "Malloc disk", 00:34:59.407 "block_size": 512, 00:34:59.407 "num_blocks": 65536, 00:34:59.407 "uuid": "af0ec7e6-917f-45ba-bca2-8245f8aadc5e", 00:34:59.407 "assigned_rate_limits": { 00:34:59.407 "rw_ios_per_sec": 0, 00:34:59.407 "rw_mbytes_per_sec": 0, 00:34:59.407 "r_mbytes_per_sec": 0, 00:34:59.407 "w_mbytes_per_sec": 0 00:34:59.407 }, 00:34:59.407 "claimed": false, 00:34:59.407 "zoned": false, 00:34:59.407 "supported_io_types": { 00:34:59.407 "read": true, 00:34:59.407 "write": true, 00:34:59.407 "unmap": true, 00:34:59.407 "flush": true, 00:34:59.407 "reset": true, 00:34:59.407 "nvme_admin": false, 00:34:59.407 "nvme_io": false, 00:34:59.407 "nvme_io_md": false, 00:34:59.407 "write_zeroes": true, 00:34:59.407 "zcopy": true, 00:34:59.407 "get_zone_info": false, 00:34:59.407 "zone_management": false, 00:34:59.407 "zone_append": false, 00:34:59.407 "compare": false, 00:34:59.407 "compare_and_write": false, 00:34:59.407 "abort": true, 00:34:59.407 "seek_hole": false, 00:34:59.407 
"seek_data": false, 00:34:59.407 "copy": true, 00:34:59.407 "nvme_iov_md": false 00:34:59.407 }, 00:34:59.407 "memory_domains": [ 00:34:59.407 { 00:34:59.407 "dma_device_id": "system", 00:34:59.407 "dma_device_type": 1 00:34:59.407 }, 00:34:59.407 { 00:34:59.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:59.407 "dma_device_type": 2 00:34:59.407 } 00:34:59.407 ], 00:34:59.407 "driver_specific": {} 00:34:59.407 } 00:34:59.407 ] 00:34:59.407 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.407 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:34:59.407 23:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:34:59.407 23:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:34:59.407 23:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:34:59.407 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.407 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.407 BaseBdev3 00:34:59.407 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.407 23:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:34:59.407 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:34:59.407 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:59.407 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:34:59.407 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:59.407 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:34:59.407 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:59.407 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.407 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.407 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.407 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:34:59.407 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.407 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.407 [ 00:34:59.407 { 00:34:59.407 "name": "BaseBdev3", 00:34:59.407 "aliases": [ 00:34:59.407 "3c1fba5f-3203-4760-8dc3-1bf0b94372ba" 00:34:59.407 ], 00:34:59.407 "product_name": "Malloc disk", 00:34:59.407 "block_size": 512, 00:34:59.407 "num_blocks": 65536, 00:34:59.407 "uuid": "3c1fba5f-3203-4760-8dc3-1bf0b94372ba", 00:34:59.407 "assigned_rate_limits": { 00:34:59.407 "rw_ios_per_sec": 0, 00:34:59.407 "rw_mbytes_per_sec": 0, 00:34:59.407 "r_mbytes_per_sec": 0, 00:34:59.407 "w_mbytes_per_sec": 0 00:34:59.407 }, 00:34:59.407 "claimed": false, 00:34:59.407 "zoned": false, 00:34:59.407 "supported_io_types": { 00:34:59.407 "read": true, 00:34:59.407 "write": true, 00:34:59.407 "unmap": true, 00:34:59.407 "flush": true, 00:34:59.407 "reset": true, 00:34:59.407 "nvme_admin": false, 00:34:59.407 "nvme_io": false, 00:34:59.407 "nvme_io_md": false, 00:34:59.407 "write_zeroes": true, 00:34:59.407 "zcopy": true, 00:34:59.407 "get_zone_info": false, 00:34:59.407 "zone_management": false, 00:34:59.407 "zone_append": false, 00:34:59.407 "compare": false, 00:34:59.407 "compare_and_write": false, 00:34:59.407 "abort": true, 00:34:59.407 "seek_hole": false, 00:34:59.407 "seek_data": false, 
00:34:59.407 "copy": true, 00:34:59.407 "nvme_iov_md": false 00:34:59.407 }, 00:34:59.407 "memory_domains": [ 00:34:59.407 { 00:34:59.407 "dma_device_id": "system", 00:34:59.407 "dma_device_type": 1 00:34:59.407 }, 00:34:59.407 { 00:34:59.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:59.407 "dma_device_type": 2 00:34:59.407 } 00:34:59.408 ], 00:34:59.408 "driver_specific": {} 00:34:59.408 } 00:34:59.408 ] 00:34:59.408 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.408 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:34:59.408 23:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:34:59.408 23:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:34:59.408 23:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:34:59.408 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.408 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.408 BaseBdev4 00:34:59.408 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.408 23:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:34:59.408 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:34:59.408 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:59.408 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:34:59.408 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:59.408 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:59.408 
23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:59.408 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.408 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.408 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.408 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:34:59.408 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.408 23:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.408 [ 00:34:59.408 { 00:34:59.408 "name": "BaseBdev4", 00:34:59.408 "aliases": [ 00:34:59.408 "82272ca1-16b7-4d39-ba80-7468af7c141b" 00:34:59.408 ], 00:34:59.408 "product_name": "Malloc disk", 00:34:59.408 "block_size": 512, 00:34:59.408 "num_blocks": 65536, 00:34:59.408 "uuid": "82272ca1-16b7-4d39-ba80-7468af7c141b", 00:34:59.408 "assigned_rate_limits": { 00:34:59.408 "rw_ios_per_sec": 0, 00:34:59.408 "rw_mbytes_per_sec": 0, 00:34:59.408 "r_mbytes_per_sec": 0, 00:34:59.408 "w_mbytes_per_sec": 0 00:34:59.408 }, 00:34:59.408 "claimed": false, 00:34:59.408 "zoned": false, 00:34:59.408 "supported_io_types": { 00:34:59.408 "read": true, 00:34:59.408 "write": true, 00:34:59.408 "unmap": true, 00:34:59.408 "flush": true, 00:34:59.408 "reset": true, 00:34:59.408 "nvme_admin": false, 00:34:59.408 "nvme_io": false, 00:34:59.408 "nvme_io_md": false, 00:34:59.408 "write_zeroes": true, 00:34:59.408 "zcopy": true, 00:34:59.408 "get_zone_info": false, 00:34:59.408 "zone_management": false, 00:34:59.408 "zone_append": false, 00:34:59.408 "compare": false, 00:34:59.408 "compare_and_write": false, 00:34:59.408 "abort": true, 00:34:59.408 "seek_hole": false, 00:34:59.408 "seek_data": false, 00:34:59.408 
"copy": true, 00:34:59.408 "nvme_iov_md": false 00:34:59.408 }, 00:34:59.408 "memory_domains": [ 00:34:59.408 { 00:34:59.408 "dma_device_id": "system", 00:34:59.408 "dma_device_type": 1 00:34:59.408 }, 00:34:59.408 { 00:34:59.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:59.408 "dma_device_type": 2 00:34:59.408 } 00:34:59.408 ], 00:34:59.408 "driver_specific": {} 00:34:59.408 } 00:34:59.408 ] 00:34:59.408 23:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.408 23:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:34:59.408 23:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:34:59.408 23:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:34:59.408 23:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:34:59.408 23:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.408 23:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.408 [2024-12-09 23:16:40.031143] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:59.408 [2024-12-09 23:16:40.031337] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:59.408 [2024-12-09 23:16:40.031467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:59.408 [2024-12-09 23:16:40.033909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:59.408 [2024-12-09 23:16:40.034085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:34:59.408 23:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.408 23:16:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:34:59.408 23:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:59.408 23:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:59.408 23:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:34:59.408 23:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:59.408 23:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:34:59.408 23:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:59.408 23:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:59.408 23:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:59.408 23:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:59.667 23:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:59.667 23:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.667 23:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:59.667 23:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.667 23:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.667 23:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:59.667 "name": "Existed_Raid", 00:34:59.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:59.667 "strip_size_kb": 64, 00:34:59.667 "state": "configuring", 00:34:59.667 
"raid_level": "raid0", 00:34:59.667 "superblock": false, 00:34:59.667 "num_base_bdevs": 4, 00:34:59.667 "num_base_bdevs_discovered": 3, 00:34:59.667 "num_base_bdevs_operational": 4, 00:34:59.667 "base_bdevs_list": [ 00:34:59.667 { 00:34:59.667 "name": "BaseBdev1", 00:34:59.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:59.667 "is_configured": false, 00:34:59.667 "data_offset": 0, 00:34:59.667 "data_size": 0 00:34:59.667 }, 00:34:59.667 { 00:34:59.667 "name": "BaseBdev2", 00:34:59.667 "uuid": "af0ec7e6-917f-45ba-bca2-8245f8aadc5e", 00:34:59.667 "is_configured": true, 00:34:59.667 "data_offset": 0, 00:34:59.667 "data_size": 65536 00:34:59.667 }, 00:34:59.667 { 00:34:59.667 "name": "BaseBdev3", 00:34:59.667 "uuid": "3c1fba5f-3203-4760-8dc3-1bf0b94372ba", 00:34:59.667 "is_configured": true, 00:34:59.667 "data_offset": 0, 00:34:59.667 "data_size": 65536 00:34:59.667 }, 00:34:59.667 { 00:34:59.667 "name": "BaseBdev4", 00:34:59.667 "uuid": "82272ca1-16b7-4d39-ba80-7468af7c141b", 00:34:59.667 "is_configured": true, 00:34:59.667 "data_offset": 0, 00:34:59.667 "data_size": 65536 00:34:59.667 } 00:34:59.667 ] 00:34:59.667 }' 00:34:59.667 23:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:59.667 23:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.925 23:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:34:59.925 23:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.925 23:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.925 [2024-12-09 23:16:40.470547] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:59.925 23:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.925 23:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:34:59.925 23:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:59.925 23:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:59.925 23:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:34:59.925 23:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:59.925 23:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:34:59.925 23:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:59.925 23:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:59.925 23:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:59.925 23:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:59.926 23:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:59.926 23:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:59.926 23:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.926 23:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.926 23:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.926 23:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:59.926 "name": "Existed_Raid", 00:34:59.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:59.926 "strip_size_kb": 64, 00:34:59.926 "state": "configuring", 00:34:59.926 "raid_level": "raid0", 00:34:59.926 "superblock": false, 00:34:59.926 
"num_base_bdevs": 4, 00:34:59.926 "num_base_bdevs_discovered": 2, 00:34:59.926 "num_base_bdevs_operational": 4, 00:34:59.926 "base_bdevs_list": [ 00:34:59.926 { 00:34:59.926 "name": "BaseBdev1", 00:34:59.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:59.926 "is_configured": false, 00:34:59.926 "data_offset": 0, 00:34:59.926 "data_size": 0 00:34:59.926 }, 00:34:59.926 { 00:34:59.926 "name": null, 00:34:59.926 "uuid": "af0ec7e6-917f-45ba-bca2-8245f8aadc5e", 00:34:59.926 "is_configured": false, 00:34:59.926 "data_offset": 0, 00:34:59.926 "data_size": 65536 00:34:59.926 }, 00:34:59.926 { 00:34:59.926 "name": "BaseBdev3", 00:34:59.926 "uuid": "3c1fba5f-3203-4760-8dc3-1bf0b94372ba", 00:34:59.926 "is_configured": true, 00:34:59.926 "data_offset": 0, 00:34:59.926 "data_size": 65536 00:34:59.926 }, 00:34:59.926 { 00:34:59.926 "name": "BaseBdev4", 00:34:59.926 "uuid": "82272ca1-16b7-4d39-ba80-7468af7c141b", 00:34:59.926 "is_configured": true, 00:34:59.926 "data_offset": 0, 00:34:59.926 "data_size": 65536 00:34:59.926 } 00:34:59.926 ] 00:34:59.926 }' 00:34:59.926 23:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:59.926 23:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:00.497 23:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:00.497 23:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.497 23:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:00.497 23:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:35:00.497 23:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.497 23:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:35:00.497 23:16:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:35:00.497 23:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.497 23:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:00.497 [2024-12-09 23:16:40.985793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:00.497 BaseBdev1 00:35:00.497 23:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.497 23:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:35:00.497 23:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:35:00.497 23:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:00.497 23:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:35:00.497 23:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:00.497 23:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:00.497 23:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:00.497 23:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.497 23:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:00.497 23:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.497 23:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:00.497 23:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.497 23:16:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:35:00.497 [ 00:35:00.497 { 00:35:00.497 "name": "BaseBdev1", 00:35:00.497 "aliases": [ 00:35:00.497 "f2e7a215-57cc-4cca-a77a-2caff10a4165" 00:35:00.497 ], 00:35:00.497 "product_name": "Malloc disk", 00:35:00.497 "block_size": 512, 00:35:00.497 "num_blocks": 65536, 00:35:00.497 "uuid": "f2e7a215-57cc-4cca-a77a-2caff10a4165", 00:35:00.497 "assigned_rate_limits": { 00:35:00.497 "rw_ios_per_sec": 0, 00:35:00.497 "rw_mbytes_per_sec": 0, 00:35:00.497 "r_mbytes_per_sec": 0, 00:35:00.497 "w_mbytes_per_sec": 0 00:35:00.497 }, 00:35:00.497 "claimed": true, 00:35:00.497 "claim_type": "exclusive_write", 00:35:00.497 "zoned": false, 00:35:00.497 "supported_io_types": { 00:35:00.497 "read": true, 00:35:00.497 "write": true, 00:35:00.497 "unmap": true, 00:35:00.497 "flush": true, 00:35:00.497 "reset": true, 00:35:00.497 "nvme_admin": false, 00:35:00.497 "nvme_io": false, 00:35:00.497 "nvme_io_md": false, 00:35:00.497 "write_zeroes": true, 00:35:00.497 "zcopy": true, 00:35:00.497 "get_zone_info": false, 00:35:00.497 "zone_management": false, 00:35:00.497 "zone_append": false, 00:35:00.497 "compare": false, 00:35:00.497 "compare_and_write": false, 00:35:00.497 "abort": true, 00:35:00.497 "seek_hole": false, 00:35:00.497 "seek_data": false, 00:35:00.497 "copy": true, 00:35:00.497 "nvme_iov_md": false 00:35:00.497 }, 00:35:00.497 "memory_domains": [ 00:35:00.497 { 00:35:00.497 "dma_device_id": "system", 00:35:00.497 "dma_device_type": 1 00:35:00.497 }, 00:35:00.497 { 00:35:00.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:00.497 "dma_device_type": 2 00:35:00.497 } 00:35:00.497 ], 00:35:00.497 "driver_specific": {} 00:35:00.497 } 00:35:00.497 ] 00:35:00.497 23:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.497 23:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:35:00.497 23:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:00.497 23:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:00.497 23:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:00.497 23:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:00.497 23:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:00.497 23:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:00.497 23:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:00.497 23:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:00.497 23:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:00.497 23:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:00.497 23:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:00.498 23:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:00.498 23:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.498 23:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:00.498 23:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.498 23:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:00.498 "name": "Existed_Raid", 00:35:00.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:00.498 "strip_size_kb": 64, 00:35:00.498 "state": "configuring", 00:35:00.498 "raid_level": "raid0", 00:35:00.498 "superblock": false, 
00:35:00.498 "num_base_bdevs": 4, 00:35:00.498 "num_base_bdevs_discovered": 3, 00:35:00.498 "num_base_bdevs_operational": 4, 00:35:00.498 "base_bdevs_list": [ 00:35:00.498 { 00:35:00.498 "name": "BaseBdev1", 00:35:00.498 "uuid": "f2e7a215-57cc-4cca-a77a-2caff10a4165", 00:35:00.498 "is_configured": true, 00:35:00.498 "data_offset": 0, 00:35:00.498 "data_size": 65536 00:35:00.498 }, 00:35:00.498 { 00:35:00.498 "name": null, 00:35:00.498 "uuid": "af0ec7e6-917f-45ba-bca2-8245f8aadc5e", 00:35:00.498 "is_configured": false, 00:35:00.498 "data_offset": 0, 00:35:00.498 "data_size": 65536 00:35:00.498 }, 00:35:00.498 { 00:35:00.498 "name": "BaseBdev3", 00:35:00.498 "uuid": "3c1fba5f-3203-4760-8dc3-1bf0b94372ba", 00:35:00.498 "is_configured": true, 00:35:00.498 "data_offset": 0, 00:35:00.498 "data_size": 65536 00:35:00.498 }, 00:35:00.498 { 00:35:00.498 "name": "BaseBdev4", 00:35:00.498 "uuid": "82272ca1-16b7-4d39-ba80-7468af7c141b", 00:35:00.498 "is_configured": true, 00:35:00.498 "data_offset": 0, 00:35:00.498 "data_size": 65536 00:35:00.498 } 00:35:00.498 ] 00:35:00.498 }' 00:35:00.498 23:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:00.498 23:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:01.066 23:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:01.066 23:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.066 23:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:01.066 23:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:35:01.066 23:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.066 23:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:35:01.066 23:16:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:35:01.066 23:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.066 23:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:01.066 [2024-12-09 23:16:41.541137] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:35:01.066 23:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.066 23:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:01.066 23:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:01.066 23:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:01.066 23:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:01.066 23:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:01.066 23:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:01.066 23:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:01.066 23:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:01.066 23:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:01.066 23:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:01.066 23:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:01.066 23:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:01.066 23:16:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.066 23:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:01.066 23:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.066 23:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:01.066 "name": "Existed_Raid", 00:35:01.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:01.066 "strip_size_kb": 64, 00:35:01.066 "state": "configuring", 00:35:01.066 "raid_level": "raid0", 00:35:01.066 "superblock": false, 00:35:01.066 "num_base_bdevs": 4, 00:35:01.066 "num_base_bdevs_discovered": 2, 00:35:01.066 "num_base_bdevs_operational": 4, 00:35:01.066 "base_bdevs_list": [ 00:35:01.066 { 00:35:01.066 "name": "BaseBdev1", 00:35:01.066 "uuid": "f2e7a215-57cc-4cca-a77a-2caff10a4165", 00:35:01.066 "is_configured": true, 00:35:01.066 "data_offset": 0, 00:35:01.066 "data_size": 65536 00:35:01.066 }, 00:35:01.066 { 00:35:01.066 "name": null, 00:35:01.066 "uuid": "af0ec7e6-917f-45ba-bca2-8245f8aadc5e", 00:35:01.066 "is_configured": false, 00:35:01.066 "data_offset": 0, 00:35:01.066 "data_size": 65536 00:35:01.066 }, 00:35:01.066 { 00:35:01.066 "name": null, 00:35:01.066 "uuid": "3c1fba5f-3203-4760-8dc3-1bf0b94372ba", 00:35:01.066 "is_configured": false, 00:35:01.066 "data_offset": 0, 00:35:01.066 "data_size": 65536 00:35:01.066 }, 00:35:01.066 { 00:35:01.066 "name": "BaseBdev4", 00:35:01.066 "uuid": "82272ca1-16b7-4d39-ba80-7468af7c141b", 00:35:01.066 "is_configured": true, 00:35:01.066 "data_offset": 0, 00:35:01.066 "data_size": 65536 00:35:01.066 } 00:35:01.066 ] 00:35:01.066 }' 00:35:01.066 23:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:01.066 23:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:01.634 23:16:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:01.634 23:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.634 23:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:01.634 23:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:35:01.634 23:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.634 23:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:35:01.634 23:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:35:01.634 23:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.634 23:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:01.634 [2024-12-09 23:16:42.028513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:01.634 23:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.634 23:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:01.634 23:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:01.634 23:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:01.634 23:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:01.634 23:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:01.634 23:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:01.634 23:16:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:01.634 23:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:01.634 23:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:01.634 23:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:01.634 23:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:01.634 23:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:01.634 23:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.634 23:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:01.634 23:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.635 23:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:01.635 "name": "Existed_Raid", 00:35:01.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:01.635 "strip_size_kb": 64, 00:35:01.635 "state": "configuring", 00:35:01.635 "raid_level": "raid0", 00:35:01.635 "superblock": false, 00:35:01.635 "num_base_bdevs": 4, 00:35:01.635 "num_base_bdevs_discovered": 3, 00:35:01.635 "num_base_bdevs_operational": 4, 00:35:01.635 "base_bdevs_list": [ 00:35:01.635 { 00:35:01.635 "name": "BaseBdev1", 00:35:01.635 "uuid": "f2e7a215-57cc-4cca-a77a-2caff10a4165", 00:35:01.635 "is_configured": true, 00:35:01.635 "data_offset": 0, 00:35:01.635 "data_size": 65536 00:35:01.635 }, 00:35:01.635 { 00:35:01.635 "name": null, 00:35:01.635 "uuid": "af0ec7e6-917f-45ba-bca2-8245f8aadc5e", 00:35:01.635 "is_configured": false, 00:35:01.635 "data_offset": 0, 00:35:01.635 "data_size": 65536 00:35:01.635 }, 00:35:01.635 { 00:35:01.635 "name": "BaseBdev3", 00:35:01.635 "uuid": "3c1fba5f-3203-4760-8dc3-1bf0b94372ba", 
00:35:01.635 "is_configured": true, 00:35:01.635 "data_offset": 0, 00:35:01.635 "data_size": 65536 00:35:01.635 }, 00:35:01.635 { 00:35:01.635 "name": "BaseBdev4", 00:35:01.635 "uuid": "82272ca1-16b7-4d39-ba80-7468af7c141b", 00:35:01.635 "is_configured": true, 00:35:01.635 "data_offset": 0, 00:35:01.635 "data_size": 65536 00:35:01.635 } 00:35:01.635 ] 00:35:01.635 }' 00:35:01.635 23:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:01.635 23:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:01.893 23:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:35:01.893 23:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:01.893 23:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.893 23:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:01.893 23:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.153 23:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:35:02.153 23:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:35:02.153 23:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.153 23:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:02.153 [2024-12-09 23:16:42.535836] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:02.153 23:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.153 23:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:02.153 23:16:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:02.153 23:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:02.153 23:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:02.153 23:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:02.153 23:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:02.153 23:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:02.153 23:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:02.153 23:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:02.153 23:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:02.153 23:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:02.153 23:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.153 23:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:02.153 23:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:02.153 23:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.153 23:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:02.153 "name": "Existed_Raid", 00:35:02.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:02.153 "strip_size_kb": 64, 00:35:02.153 "state": "configuring", 00:35:02.153 "raid_level": "raid0", 00:35:02.153 "superblock": false, 00:35:02.153 "num_base_bdevs": 4, 00:35:02.153 "num_base_bdevs_discovered": 2, 00:35:02.153 
"num_base_bdevs_operational": 4, 00:35:02.153 "base_bdevs_list": [ 00:35:02.153 { 00:35:02.153 "name": null, 00:35:02.153 "uuid": "f2e7a215-57cc-4cca-a77a-2caff10a4165", 00:35:02.153 "is_configured": false, 00:35:02.153 "data_offset": 0, 00:35:02.153 "data_size": 65536 00:35:02.153 }, 00:35:02.153 { 00:35:02.153 "name": null, 00:35:02.153 "uuid": "af0ec7e6-917f-45ba-bca2-8245f8aadc5e", 00:35:02.153 "is_configured": false, 00:35:02.153 "data_offset": 0, 00:35:02.153 "data_size": 65536 00:35:02.153 }, 00:35:02.153 { 00:35:02.153 "name": "BaseBdev3", 00:35:02.153 "uuid": "3c1fba5f-3203-4760-8dc3-1bf0b94372ba", 00:35:02.153 "is_configured": true, 00:35:02.153 "data_offset": 0, 00:35:02.153 "data_size": 65536 00:35:02.153 }, 00:35:02.153 { 00:35:02.153 "name": "BaseBdev4", 00:35:02.153 "uuid": "82272ca1-16b7-4d39-ba80-7468af7c141b", 00:35:02.153 "is_configured": true, 00:35:02.153 "data_offset": 0, 00:35:02.153 "data_size": 65536 00:35:02.153 } 00:35:02.153 ] 00:35:02.153 }' 00:35:02.153 23:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:02.153 23:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:02.412 23:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:02.412 23:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.412 23:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:35:02.412 23:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:02.671 23:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.671 23:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:35:02.671 23:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:35:02.671 23:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.671 23:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:02.671 [2024-12-09 23:16:43.084465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:02.671 23:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.671 23:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:02.671 23:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:02.671 23:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:02.671 23:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:02.671 23:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:02.671 23:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:02.671 23:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:02.671 23:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:02.671 23:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:02.671 23:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:02.671 23:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:02.671 23:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:02.671 23:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.671 
23:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:02.671 23:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.671 23:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:02.671 "name": "Existed_Raid", 00:35:02.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:02.671 "strip_size_kb": 64, 00:35:02.671 "state": "configuring", 00:35:02.671 "raid_level": "raid0", 00:35:02.671 "superblock": false, 00:35:02.671 "num_base_bdevs": 4, 00:35:02.671 "num_base_bdevs_discovered": 3, 00:35:02.671 "num_base_bdevs_operational": 4, 00:35:02.671 "base_bdevs_list": [ 00:35:02.671 { 00:35:02.671 "name": null, 00:35:02.671 "uuid": "f2e7a215-57cc-4cca-a77a-2caff10a4165", 00:35:02.671 "is_configured": false, 00:35:02.671 "data_offset": 0, 00:35:02.671 "data_size": 65536 00:35:02.671 }, 00:35:02.671 { 00:35:02.671 "name": "BaseBdev2", 00:35:02.671 "uuid": "af0ec7e6-917f-45ba-bca2-8245f8aadc5e", 00:35:02.671 "is_configured": true, 00:35:02.671 "data_offset": 0, 00:35:02.671 "data_size": 65536 00:35:02.671 }, 00:35:02.671 { 00:35:02.671 "name": "BaseBdev3", 00:35:02.671 "uuid": "3c1fba5f-3203-4760-8dc3-1bf0b94372ba", 00:35:02.671 "is_configured": true, 00:35:02.671 "data_offset": 0, 00:35:02.671 "data_size": 65536 00:35:02.671 }, 00:35:02.671 { 00:35:02.671 "name": "BaseBdev4", 00:35:02.671 "uuid": "82272ca1-16b7-4d39-ba80-7468af7c141b", 00:35:02.671 "is_configured": true, 00:35:02.671 "data_offset": 0, 00:35:02.671 "data_size": 65536 00:35:02.671 } 00:35:02.671 ] 00:35:02.671 }' 00:35:02.671 23:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:02.671 23:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:02.987 23:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:35:02.987 23:16:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:02.987 23:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.987 23:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:02.987 23:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.987 23:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:35:02.987 23:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:02.987 23:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.987 23:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:35:02.987 23:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:02.987 23:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.987 23:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f2e7a215-57cc-4cca-a77a-2caff10a4165 00:35:02.987 23:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.987 23:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:03.246 [2024-12-09 23:16:43.640516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:35:03.246 [2024-12-09 23:16:43.640595] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:35:03.246 [2024-12-09 23:16:43.640613] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:35:03.246 [2024-12-09 23:16:43.641001] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:35:03.246 
[2024-12-09 23:16:43.641203] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:35:03.246 [2024-12-09 23:16:43.641230] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:35:03.246 [2024-12-09 23:16:43.641582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:03.246 NewBaseBdev 00:35:03.246 23:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.246 23:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:35:03.246 23:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:35:03.246 23:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:03.246 23:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:35:03.246 23:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:03.246 23:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:03.246 23:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:03.246 23:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.246 23:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:03.246 23:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.246 23:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:35:03.246 23:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.246 23:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:35:03.246 [ 00:35:03.246 { 00:35:03.246 "name": "NewBaseBdev", 00:35:03.246 "aliases": [ 00:35:03.246 "f2e7a215-57cc-4cca-a77a-2caff10a4165" 00:35:03.246 ], 00:35:03.246 "product_name": "Malloc disk", 00:35:03.246 "block_size": 512, 00:35:03.246 "num_blocks": 65536, 00:35:03.246 "uuid": "f2e7a215-57cc-4cca-a77a-2caff10a4165", 00:35:03.246 "assigned_rate_limits": { 00:35:03.246 "rw_ios_per_sec": 0, 00:35:03.246 "rw_mbytes_per_sec": 0, 00:35:03.246 "r_mbytes_per_sec": 0, 00:35:03.246 "w_mbytes_per_sec": 0 00:35:03.246 }, 00:35:03.246 "claimed": true, 00:35:03.246 "claim_type": "exclusive_write", 00:35:03.246 "zoned": false, 00:35:03.246 "supported_io_types": { 00:35:03.246 "read": true, 00:35:03.246 "write": true, 00:35:03.246 "unmap": true, 00:35:03.246 "flush": true, 00:35:03.246 "reset": true, 00:35:03.246 "nvme_admin": false, 00:35:03.246 "nvme_io": false, 00:35:03.246 "nvme_io_md": false, 00:35:03.246 "write_zeroes": true, 00:35:03.246 "zcopy": true, 00:35:03.246 "get_zone_info": false, 00:35:03.246 "zone_management": false, 00:35:03.246 "zone_append": false, 00:35:03.246 "compare": false, 00:35:03.246 "compare_and_write": false, 00:35:03.246 "abort": true, 00:35:03.246 "seek_hole": false, 00:35:03.246 "seek_data": false, 00:35:03.246 "copy": true, 00:35:03.246 "nvme_iov_md": false 00:35:03.246 }, 00:35:03.246 "memory_domains": [ 00:35:03.246 { 00:35:03.246 "dma_device_id": "system", 00:35:03.246 "dma_device_type": 1 00:35:03.246 }, 00:35:03.246 { 00:35:03.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:03.246 "dma_device_type": 2 00:35:03.246 } 00:35:03.246 ], 00:35:03.246 "driver_specific": {} 00:35:03.246 } 00:35:03.246 ] 00:35:03.246 23:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.246 23:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:35:03.246 23:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid 
online raid0 64 4 00:35:03.246 23:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:03.246 23:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:03.246 23:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:03.246 23:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:03.246 23:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:03.246 23:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:03.246 23:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:03.246 23:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:03.246 23:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:03.246 23:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:03.246 23:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:03.246 23:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.246 23:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:03.246 23:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.246 23:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:03.246 "name": "Existed_Raid", 00:35:03.246 "uuid": "69786498-b599-4c0c-ae92-9d9398aa990a", 00:35:03.246 "strip_size_kb": 64, 00:35:03.246 "state": "online", 00:35:03.246 "raid_level": "raid0", 00:35:03.246 "superblock": false, 00:35:03.246 "num_base_bdevs": 4, 00:35:03.246 
"num_base_bdevs_discovered": 4, 00:35:03.246 "num_base_bdevs_operational": 4, 00:35:03.246 "base_bdevs_list": [ 00:35:03.246 { 00:35:03.246 "name": "NewBaseBdev", 00:35:03.246 "uuid": "f2e7a215-57cc-4cca-a77a-2caff10a4165", 00:35:03.246 "is_configured": true, 00:35:03.246 "data_offset": 0, 00:35:03.246 "data_size": 65536 00:35:03.246 }, 00:35:03.246 { 00:35:03.246 "name": "BaseBdev2", 00:35:03.246 "uuid": "af0ec7e6-917f-45ba-bca2-8245f8aadc5e", 00:35:03.246 "is_configured": true, 00:35:03.246 "data_offset": 0, 00:35:03.246 "data_size": 65536 00:35:03.246 }, 00:35:03.246 { 00:35:03.246 "name": "BaseBdev3", 00:35:03.247 "uuid": "3c1fba5f-3203-4760-8dc3-1bf0b94372ba", 00:35:03.247 "is_configured": true, 00:35:03.247 "data_offset": 0, 00:35:03.247 "data_size": 65536 00:35:03.247 }, 00:35:03.247 { 00:35:03.247 "name": "BaseBdev4", 00:35:03.247 "uuid": "82272ca1-16b7-4d39-ba80-7468af7c141b", 00:35:03.247 "is_configured": true, 00:35:03.247 "data_offset": 0, 00:35:03.247 "data_size": 65536 00:35:03.247 } 00:35:03.247 ] 00:35:03.247 }' 00:35:03.247 23:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:03.247 23:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:03.505 23:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:35:03.505 23:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:35:03.505 23:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:35:03.505 23:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:35:03.505 23:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:35:03.505 23:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:35:03.505 23:16:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:35:03.505 23:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.505 23:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:03.505 23:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:35:03.505 [2024-12-09 23:16:44.080244] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:03.505 23:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.505 23:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:03.505 "name": "Existed_Raid", 00:35:03.505 "aliases": [ 00:35:03.505 "69786498-b599-4c0c-ae92-9d9398aa990a" 00:35:03.505 ], 00:35:03.505 "product_name": "Raid Volume", 00:35:03.505 "block_size": 512, 00:35:03.505 "num_blocks": 262144, 00:35:03.505 "uuid": "69786498-b599-4c0c-ae92-9d9398aa990a", 00:35:03.505 "assigned_rate_limits": { 00:35:03.505 "rw_ios_per_sec": 0, 00:35:03.505 "rw_mbytes_per_sec": 0, 00:35:03.505 "r_mbytes_per_sec": 0, 00:35:03.505 "w_mbytes_per_sec": 0 00:35:03.505 }, 00:35:03.505 "claimed": false, 00:35:03.505 "zoned": false, 00:35:03.505 "supported_io_types": { 00:35:03.505 "read": true, 00:35:03.505 "write": true, 00:35:03.505 "unmap": true, 00:35:03.505 "flush": true, 00:35:03.505 "reset": true, 00:35:03.505 "nvme_admin": false, 00:35:03.505 "nvme_io": false, 00:35:03.505 "nvme_io_md": false, 00:35:03.505 "write_zeroes": true, 00:35:03.505 "zcopy": false, 00:35:03.505 "get_zone_info": false, 00:35:03.505 "zone_management": false, 00:35:03.505 "zone_append": false, 00:35:03.505 "compare": false, 00:35:03.505 "compare_and_write": false, 00:35:03.505 "abort": false, 00:35:03.505 "seek_hole": false, 00:35:03.505 "seek_data": false, 00:35:03.505 "copy": false, 00:35:03.505 "nvme_iov_md": false 00:35:03.505 }, 00:35:03.505 "memory_domains": [ 
00:35:03.505 { 00:35:03.505 "dma_device_id": "system", 00:35:03.505 "dma_device_type": 1 00:35:03.505 }, 00:35:03.505 { 00:35:03.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:03.505 "dma_device_type": 2 00:35:03.505 }, 00:35:03.505 { 00:35:03.505 "dma_device_id": "system", 00:35:03.505 "dma_device_type": 1 00:35:03.505 }, 00:35:03.505 { 00:35:03.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:03.505 "dma_device_type": 2 00:35:03.505 }, 00:35:03.505 { 00:35:03.505 "dma_device_id": "system", 00:35:03.505 "dma_device_type": 1 00:35:03.505 }, 00:35:03.505 { 00:35:03.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:03.505 "dma_device_type": 2 00:35:03.505 }, 00:35:03.505 { 00:35:03.505 "dma_device_id": "system", 00:35:03.505 "dma_device_type": 1 00:35:03.505 }, 00:35:03.505 { 00:35:03.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:03.505 "dma_device_type": 2 00:35:03.505 } 00:35:03.505 ], 00:35:03.505 "driver_specific": { 00:35:03.505 "raid": { 00:35:03.505 "uuid": "69786498-b599-4c0c-ae92-9d9398aa990a", 00:35:03.505 "strip_size_kb": 64, 00:35:03.505 "state": "online", 00:35:03.505 "raid_level": "raid0", 00:35:03.505 "superblock": false, 00:35:03.505 "num_base_bdevs": 4, 00:35:03.505 "num_base_bdevs_discovered": 4, 00:35:03.505 "num_base_bdevs_operational": 4, 00:35:03.505 "base_bdevs_list": [ 00:35:03.505 { 00:35:03.505 "name": "NewBaseBdev", 00:35:03.505 "uuid": "f2e7a215-57cc-4cca-a77a-2caff10a4165", 00:35:03.505 "is_configured": true, 00:35:03.505 "data_offset": 0, 00:35:03.505 "data_size": 65536 00:35:03.505 }, 00:35:03.505 { 00:35:03.505 "name": "BaseBdev2", 00:35:03.505 "uuid": "af0ec7e6-917f-45ba-bca2-8245f8aadc5e", 00:35:03.505 "is_configured": true, 00:35:03.505 "data_offset": 0, 00:35:03.505 "data_size": 65536 00:35:03.505 }, 00:35:03.505 { 00:35:03.505 "name": "BaseBdev3", 00:35:03.505 "uuid": "3c1fba5f-3203-4760-8dc3-1bf0b94372ba", 00:35:03.505 "is_configured": true, 00:35:03.505 "data_offset": 0, 00:35:03.505 "data_size": 65536 
00:35:03.505 }, 00:35:03.505 { 00:35:03.505 "name": "BaseBdev4", 00:35:03.505 "uuid": "82272ca1-16b7-4d39-ba80-7468af7c141b", 00:35:03.505 "is_configured": true, 00:35:03.505 "data_offset": 0, 00:35:03.505 "data_size": 65536 00:35:03.505 } 00:35:03.505 ] 00:35:03.505 } 00:35:03.505 } 00:35:03.505 }' 00:35:03.505 23:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:03.764 23:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:35:03.764 BaseBdev2 00:35:03.764 BaseBdev3 00:35:03.764 BaseBdev4' 00:35:03.764 23:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:03.764 23:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:35:03.764 23:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:03.764 23:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:03.764 23:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:35:03.764 23:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.764 23:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:03.764 23:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.764 23:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:03.764 23:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:03.764 23:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:03.764 
23:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:03.764 23:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:35:03.764 23:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.764 23:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:03.764 23:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.764 23:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:03.764 23:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:03.764 23:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:03.764 23:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:35:03.764 23:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.764 23:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:03.764 23:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:03.764 23:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.764 23:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:03.764 23:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:03.764 23:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:03.764 23:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:35:03.764 23:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.764 23:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:03.764 23:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:03.764 23:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.023 23:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:04.023 23:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:04.023 23:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:35:04.023 23:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.023 23:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:04.023 [2024-12-09 23:16:44.419470] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:04.023 [2024-12-09 23:16:44.419624] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:04.023 [2024-12-09 23:16:44.419732] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:04.023 [2024-12-09 23:16:44.419808] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:04.023 [2024-12-09 23:16:44.419822] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:35:04.023 23:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.023 23:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69251 00:35:04.023 23:16:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 69251 ']' 00:35:04.023 23:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69251 00:35:04.023 23:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:35:04.023 23:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:04.023 23:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69251 00:35:04.023 killing process with pid 69251 00:35:04.023 23:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:04.023 23:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:04.023 23:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69251' 00:35:04.023 23:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69251 00:35:04.023 [2024-12-09 23:16:44.468290] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:04.023 23:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69251 00:35:04.282 [2024-12-09 23:16:44.869268] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:35:05.657 00:35:05.657 real 0m11.479s 00:35:05.657 user 0m18.196s 00:35:05.657 sys 0m2.304s 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:05.657 ************************************ 00:35:05.657 END TEST raid_state_function_test 00:35:05.657 ************************************ 00:35:05.657 23:16:46 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:35:05.657 23:16:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:35:05.657 23:16:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:05.657 23:16:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:05.657 ************************************ 00:35:05.657 START TEST raid_state_function_test_sb 00:35:05.657 ************************************ 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:35:05.657 
23:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=69917 00:35:05.657 Process raid pid: 69917 00:35:05.657 Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69917' 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 69917 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 69917 ']' 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:05.657 23:16:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:05.657 [2024-12-09 23:16:46.200548] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:35:05.657 [2024-12-09 23:16:46.200706] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:05.915 [2024-12-09 23:16:46.396282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:05.915 [2024-12-09 23:16:46.519659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:06.173 [2024-12-09 23:16:46.736221] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:06.173 [2024-12-09 23:16:46.736270] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:06.745 23:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:06.745 23:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:35:06.745 23:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:35:06.745 23:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.745 23:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:06.745 [2024-12-09 23:16:47.118765] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:06.745 [2024-12-09 23:16:47.118830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:06.745 [2024-12-09 23:16:47.118847] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:06.745 [2024-12-09 23:16:47.118866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:06.745 [2024-12-09 23:16:47.118877] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:35:06.745 [2024-12-09 23:16:47.118894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:06.745 [2024-12-09 23:16:47.118906] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:35:06.745 [2024-12-09 23:16:47.118923] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:35:06.745 23:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.745 23:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:06.745 23:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:06.745 23:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:06.745 23:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:06.745 23:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:06.745 23:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:06.745 23:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:06.745 23:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:06.745 23:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:06.745 23:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:06.745 23:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:06.745 23:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.745 23:16:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:06.745 23:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:06.745 23:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.745 23:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:06.745 "name": "Existed_Raid", 00:35:06.745 "uuid": "6107604c-c64a-40f4-8b03-c3085521e381", 00:35:06.745 "strip_size_kb": 64, 00:35:06.745 "state": "configuring", 00:35:06.745 "raid_level": "raid0", 00:35:06.745 "superblock": true, 00:35:06.745 "num_base_bdevs": 4, 00:35:06.745 "num_base_bdevs_discovered": 0, 00:35:06.745 "num_base_bdevs_operational": 4, 00:35:06.745 "base_bdevs_list": [ 00:35:06.745 { 00:35:06.745 "name": "BaseBdev1", 00:35:06.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:06.745 "is_configured": false, 00:35:06.745 "data_offset": 0, 00:35:06.745 "data_size": 0 00:35:06.745 }, 00:35:06.745 { 00:35:06.745 "name": "BaseBdev2", 00:35:06.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:06.745 "is_configured": false, 00:35:06.745 "data_offset": 0, 00:35:06.745 "data_size": 0 00:35:06.745 }, 00:35:06.745 { 00:35:06.745 "name": "BaseBdev3", 00:35:06.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:06.745 "is_configured": false, 00:35:06.745 "data_offset": 0, 00:35:06.745 "data_size": 0 00:35:06.745 }, 00:35:06.745 { 00:35:06.745 "name": "BaseBdev4", 00:35:06.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:06.745 "is_configured": false, 00:35:06.745 "data_offset": 0, 00:35:06.745 "data_size": 0 00:35:06.745 } 00:35:06.745 ] 00:35:06.745 }' 00:35:06.745 23:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:06.745 23:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:07.006 23:16:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:35:07.006 23:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.006 23:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:07.006 [2024-12-09 23:16:47.542289] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:07.006 [2024-12-09 23:16:47.542332] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:35:07.006 23:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.006 23:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:35:07.006 23:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.006 23:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:07.006 [2024-12-09 23:16:47.550284] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:07.006 [2024-12-09 23:16:47.550470] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:07.006 [2024-12-09 23:16:47.550500] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:07.006 [2024-12-09 23:16:47.550523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:07.006 [2024-12-09 23:16:47.550535] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:35:07.006 [2024-12-09 23:16:47.550553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:07.006 [2024-12-09 23:16:47.550565] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:35:07.006 [2024-12-09 23:16:47.550586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:35:07.006 23:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.006 23:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:35:07.006 23:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.007 23:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:07.007 [2024-12-09 23:16:47.595833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:07.007 BaseBdev1 00:35:07.007 23:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.007 23:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:35:07.007 23:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:35:07.007 23:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:07.007 23:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:07.007 23:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:07.007 23:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:07.007 23:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:07.007 23:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.007 23:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:07.007 23:16:47 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.007 23:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:07.007 23:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.007 23:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:07.007 [ 00:35:07.007 { 00:35:07.007 "name": "BaseBdev1", 00:35:07.007 "aliases": [ 00:35:07.007 "2f0783d4-21fc-4b14-8a97-286eeb63d82f" 00:35:07.007 ], 00:35:07.007 "product_name": "Malloc disk", 00:35:07.007 "block_size": 512, 00:35:07.007 "num_blocks": 65536, 00:35:07.007 "uuid": "2f0783d4-21fc-4b14-8a97-286eeb63d82f", 00:35:07.007 "assigned_rate_limits": { 00:35:07.007 "rw_ios_per_sec": 0, 00:35:07.007 "rw_mbytes_per_sec": 0, 00:35:07.007 "r_mbytes_per_sec": 0, 00:35:07.007 "w_mbytes_per_sec": 0 00:35:07.007 }, 00:35:07.007 "claimed": true, 00:35:07.007 "claim_type": "exclusive_write", 00:35:07.007 "zoned": false, 00:35:07.007 "supported_io_types": { 00:35:07.007 "read": true, 00:35:07.007 "write": true, 00:35:07.007 "unmap": true, 00:35:07.007 "flush": true, 00:35:07.007 "reset": true, 00:35:07.007 "nvme_admin": false, 00:35:07.007 "nvme_io": false, 00:35:07.007 "nvme_io_md": false, 00:35:07.007 "write_zeroes": true, 00:35:07.007 "zcopy": true, 00:35:07.007 "get_zone_info": false, 00:35:07.007 "zone_management": false, 00:35:07.007 "zone_append": false, 00:35:07.007 "compare": false, 00:35:07.007 "compare_and_write": false, 00:35:07.007 "abort": true, 00:35:07.007 "seek_hole": false, 00:35:07.007 "seek_data": false, 00:35:07.007 "copy": true, 00:35:07.007 "nvme_iov_md": false 00:35:07.007 }, 00:35:07.007 "memory_domains": [ 00:35:07.007 { 00:35:07.007 "dma_device_id": "system", 00:35:07.007 "dma_device_type": 1 00:35:07.007 }, 00:35:07.007 { 00:35:07.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:07.007 "dma_device_type": 2 00:35:07.007 } 
00:35:07.007 ], 00:35:07.007 "driver_specific": {} 00:35:07.007 } 00:35:07.007 ] 00:35:07.007 23:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.007 23:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:07.007 23:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:07.007 23:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:07.007 23:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:07.007 23:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:07.007 23:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:07.007 23:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:07.007 23:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:07.007 23:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:07.007 23:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:07.007 23:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:07.007 23:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:07.007 23:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.007 23:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:07.007 23:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:07.278 23:16:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.278 23:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:07.278 "name": "Existed_Raid", 00:35:07.278 "uuid": "89e9276c-3bac-4475-998e-a9941c28ea3b", 00:35:07.278 "strip_size_kb": 64, 00:35:07.278 "state": "configuring", 00:35:07.278 "raid_level": "raid0", 00:35:07.278 "superblock": true, 00:35:07.278 "num_base_bdevs": 4, 00:35:07.278 "num_base_bdevs_discovered": 1, 00:35:07.278 "num_base_bdevs_operational": 4, 00:35:07.278 "base_bdevs_list": [ 00:35:07.278 { 00:35:07.278 "name": "BaseBdev1", 00:35:07.278 "uuid": "2f0783d4-21fc-4b14-8a97-286eeb63d82f", 00:35:07.278 "is_configured": true, 00:35:07.278 "data_offset": 2048, 00:35:07.278 "data_size": 63488 00:35:07.278 }, 00:35:07.278 { 00:35:07.278 "name": "BaseBdev2", 00:35:07.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:07.278 "is_configured": false, 00:35:07.278 "data_offset": 0, 00:35:07.278 "data_size": 0 00:35:07.278 }, 00:35:07.278 { 00:35:07.278 "name": "BaseBdev3", 00:35:07.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:07.278 "is_configured": false, 00:35:07.278 "data_offset": 0, 00:35:07.278 "data_size": 0 00:35:07.278 }, 00:35:07.278 { 00:35:07.278 "name": "BaseBdev4", 00:35:07.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:07.278 "is_configured": false, 00:35:07.278 "data_offset": 0, 00:35:07.278 "data_size": 0 00:35:07.278 } 00:35:07.278 ] 00:35:07.278 }' 00:35:07.278 23:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:07.278 23:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:07.537 23:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:35:07.537 23:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.537 23:16:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:07.537 [2024-12-09 23:16:48.055348] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:07.537 [2024-12-09 23:16:48.055461] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:35:07.537 23:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.537 23:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:35:07.537 23:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.537 23:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:07.537 [2024-12-09 23:16:48.063454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:07.537 [2024-12-09 23:16:48.065701] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:07.537 [2024-12-09 23:16:48.065783] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:07.537 [2024-12-09 23:16:48.065951] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:35:07.537 [2024-12-09 23:16:48.066029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:07.537 [2024-12-09 23:16:48.066085] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:35:07.537 [2024-12-09 23:16:48.066270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:35:07.537 23:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.537 23:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:35:07.537 23:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:35:07.537 23:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:07.537 23:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:07.537 23:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:07.537 23:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:07.537 23:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:07.537 23:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:07.537 23:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:07.537 23:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:07.537 23:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:07.537 23:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:07.537 23:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:07.537 23:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:07.537 23:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.537 23:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:07.537 23:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.537 23:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:35:07.537 "name": "Existed_Raid", 00:35:07.537 "uuid": "ce0e3a28-bda5-4630-8d4a-a6f24860cc65", 00:35:07.537 "strip_size_kb": 64, 00:35:07.537 "state": "configuring", 00:35:07.537 "raid_level": "raid0", 00:35:07.537 "superblock": true, 00:35:07.537 "num_base_bdevs": 4, 00:35:07.537 "num_base_bdevs_discovered": 1, 00:35:07.537 "num_base_bdevs_operational": 4, 00:35:07.537 "base_bdevs_list": [ 00:35:07.537 { 00:35:07.537 "name": "BaseBdev1", 00:35:07.537 "uuid": "2f0783d4-21fc-4b14-8a97-286eeb63d82f", 00:35:07.537 "is_configured": true, 00:35:07.537 "data_offset": 2048, 00:35:07.537 "data_size": 63488 00:35:07.537 }, 00:35:07.537 { 00:35:07.537 "name": "BaseBdev2", 00:35:07.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:07.537 "is_configured": false, 00:35:07.537 "data_offset": 0, 00:35:07.537 "data_size": 0 00:35:07.537 }, 00:35:07.537 { 00:35:07.537 "name": "BaseBdev3", 00:35:07.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:07.537 "is_configured": false, 00:35:07.537 "data_offset": 0, 00:35:07.537 "data_size": 0 00:35:07.537 }, 00:35:07.537 { 00:35:07.537 "name": "BaseBdev4", 00:35:07.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:07.537 "is_configured": false, 00:35:07.537 "data_offset": 0, 00:35:07.537 "data_size": 0 00:35:07.537 } 00:35:07.537 ] 00:35:07.537 }' 00:35:07.537 23:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:07.537 23:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:08.105 23:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:35:08.105 23:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.105 23:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:08.105 [2024-12-09 23:16:48.551340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:35:08.105 BaseBdev2 00:35:08.105 23:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.105 23:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:35:08.105 23:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:35:08.105 23:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:08.105 23:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:08.105 23:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:08.105 23:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:08.105 23:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:08.105 23:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.105 23:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:08.105 23:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.105 23:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:35:08.105 23:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.105 23:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:08.105 [ 00:35:08.105 { 00:35:08.105 "name": "BaseBdev2", 00:35:08.105 "aliases": [ 00:35:08.105 "799f2028-ac77-4c19-aa14-9dd8ec5f574e" 00:35:08.105 ], 00:35:08.105 "product_name": "Malloc disk", 00:35:08.105 "block_size": 512, 00:35:08.105 "num_blocks": 65536, 00:35:08.105 "uuid": "799f2028-ac77-4c19-aa14-9dd8ec5f574e", 
00:35:08.105 "assigned_rate_limits": { 00:35:08.105 "rw_ios_per_sec": 0, 00:35:08.105 "rw_mbytes_per_sec": 0, 00:35:08.105 "r_mbytes_per_sec": 0, 00:35:08.105 "w_mbytes_per_sec": 0 00:35:08.105 }, 00:35:08.105 "claimed": true, 00:35:08.105 "claim_type": "exclusive_write", 00:35:08.105 "zoned": false, 00:35:08.105 "supported_io_types": { 00:35:08.105 "read": true, 00:35:08.105 "write": true, 00:35:08.105 "unmap": true, 00:35:08.105 "flush": true, 00:35:08.105 "reset": true, 00:35:08.105 "nvme_admin": false, 00:35:08.105 "nvme_io": false, 00:35:08.105 "nvme_io_md": false, 00:35:08.105 "write_zeroes": true, 00:35:08.105 "zcopy": true, 00:35:08.105 "get_zone_info": false, 00:35:08.105 "zone_management": false, 00:35:08.105 "zone_append": false, 00:35:08.105 "compare": false, 00:35:08.105 "compare_and_write": false, 00:35:08.105 "abort": true, 00:35:08.105 "seek_hole": false, 00:35:08.105 "seek_data": false, 00:35:08.105 "copy": true, 00:35:08.105 "nvme_iov_md": false 00:35:08.105 }, 00:35:08.105 "memory_domains": [ 00:35:08.105 { 00:35:08.105 "dma_device_id": "system", 00:35:08.105 "dma_device_type": 1 00:35:08.105 }, 00:35:08.105 { 00:35:08.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:08.105 "dma_device_type": 2 00:35:08.105 } 00:35:08.105 ], 00:35:08.105 "driver_specific": {} 00:35:08.105 } 00:35:08.105 ] 00:35:08.105 23:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.105 23:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:08.105 23:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:35:08.105 23:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:35:08.105 23:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:08.105 23:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:35:08.105 23:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:08.105 23:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:08.105 23:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:08.105 23:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:08.105 23:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:08.105 23:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:08.105 23:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:08.105 23:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:08.105 23:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:08.105 23:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:08.105 23:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.105 23:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:08.105 23:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.105 23:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:08.105 "name": "Existed_Raid", 00:35:08.105 "uuid": "ce0e3a28-bda5-4630-8d4a-a6f24860cc65", 00:35:08.105 "strip_size_kb": 64, 00:35:08.105 "state": "configuring", 00:35:08.105 "raid_level": "raid0", 00:35:08.105 "superblock": true, 00:35:08.105 "num_base_bdevs": 4, 00:35:08.105 "num_base_bdevs_discovered": 2, 00:35:08.105 
"num_base_bdevs_operational": 4, 00:35:08.105 "base_bdevs_list": [ 00:35:08.105 { 00:35:08.105 "name": "BaseBdev1", 00:35:08.105 "uuid": "2f0783d4-21fc-4b14-8a97-286eeb63d82f", 00:35:08.105 "is_configured": true, 00:35:08.105 "data_offset": 2048, 00:35:08.105 "data_size": 63488 00:35:08.105 }, 00:35:08.105 { 00:35:08.105 "name": "BaseBdev2", 00:35:08.105 "uuid": "799f2028-ac77-4c19-aa14-9dd8ec5f574e", 00:35:08.105 "is_configured": true, 00:35:08.105 "data_offset": 2048, 00:35:08.105 "data_size": 63488 00:35:08.105 }, 00:35:08.105 { 00:35:08.105 "name": "BaseBdev3", 00:35:08.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:08.105 "is_configured": false, 00:35:08.105 "data_offset": 0, 00:35:08.105 "data_size": 0 00:35:08.105 }, 00:35:08.105 { 00:35:08.105 "name": "BaseBdev4", 00:35:08.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:08.105 "is_configured": false, 00:35:08.105 "data_offset": 0, 00:35:08.105 "data_size": 0 00:35:08.105 } 00:35:08.105 ] 00:35:08.105 }' 00:35:08.105 23:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:08.105 23:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:08.364 23:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:35:08.364 23:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.364 23:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:08.623 [2024-12-09 23:16:49.049841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:08.623 BaseBdev3 00:35:08.623 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.623 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:35:08.623 23:16:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:35:08.623 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:08.623 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:08.623 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:08.623 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:08.623 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:08.623 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.623 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:08.623 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.623 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:35:08.623 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.623 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:08.623 [ 00:35:08.623 { 00:35:08.623 "name": "BaseBdev3", 00:35:08.623 "aliases": [ 00:35:08.623 "074008da-a4c8-43a4-914a-305b77d9dc10" 00:35:08.623 ], 00:35:08.623 "product_name": "Malloc disk", 00:35:08.623 "block_size": 512, 00:35:08.623 "num_blocks": 65536, 00:35:08.623 "uuid": "074008da-a4c8-43a4-914a-305b77d9dc10", 00:35:08.623 "assigned_rate_limits": { 00:35:08.623 "rw_ios_per_sec": 0, 00:35:08.623 "rw_mbytes_per_sec": 0, 00:35:08.623 "r_mbytes_per_sec": 0, 00:35:08.623 "w_mbytes_per_sec": 0 00:35:08.623 }, 00:35:08.623 "claimed": true, 00:35:08.623 "claim_type": "exclusive_write", 00:35:08.623 "zoned": false, 00:35:08.623 "supported_io_types": { 
00:35:08.623 "read": true, 00:35:08.623 "write": true, 00:35:08.623 "unmap": true, 00:35:08.623 "flush": true, 00:35:08.623 "reset": true, 00:35:08.623 "nvme_admin": false, 00:35:08.623 "nvme_io": false, 00:35:08.623 "nvme_io_md": false, 00:35:08.623 "write_zeroes": true, 00:35:08.623 "zcopy": true, 00:35:08.623 "get_zone_info": false, 00:35:08.623 "zone_management": false, 00:35:08.623 "zone_append": false, 00:35:08.623 "compare": false, 00:35:08.623 "compare_and_write": false, 00:35:08.623 "abort": true, 00:35:08.623 "seek_hole": false, 00:35:08.623 "seek_data": false, 00:35:08.623 "copy": true, 00:35:08.623 "nvme_iov_md": false 00:35:08.623 }, 00:35:08.623 "memory_domains": [ 00:35:08.623 { 00:35:08.623 "dma_device_id": "system", 00:35:08.623 "dma_device_type": 1 00:35:08.623 }, 00:35:08.623 { 00:35:08.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:08.623 "dma_device_type": 2 00:35:08.623 } 00:35:08.623 ], 00:35:08.623 "driver_specific": {} 00:35:08.623 } 00:35:08.623 ] 00:35:08.623 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.623 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:08.623 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:35:08.623 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:35:08.623 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:08.623 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:08.623 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:08.623 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:08.623 23:16:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:08.623 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:08.623 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:08.623 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:08.623 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:08.623 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:08.623 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:08.624 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:08.624 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.624 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:08.624 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.624 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:08.624 "name": "Existed_Raid", 00:35:08.624 "uuid": "ce0e3a28-bda5-4630-8d4a-a6f24860cc65", 00:35:08.624 "strip_size_kb": 64, 00:35:08.624 "state": "configuring", 00:35:08.624 "raid_level": "raid0", 00:35:08.624 "superblock": true, 00:35:08.624 "num_base_bdevs": 4, 00:35:08.624 "num_base_bdevs_discovered": 3, 00:35:08.624 "num_base_bdevs_operational": 4, 00:35:08.624 "base_bdevs_list": [ 00:35:08.624 { 00:35:08.624 "name": "BaseBdev1", 00:35:08.624 "uuid": "2f0783d4-21fc-4b14-8a97-286eeb63d82f", 00:35:08.624 "is_configured": true, 00:35:08.624 "data_offset": 2048, 00:35:08.624 "data_size": 63488 00:35:08.624 }, 00:35:08.624 { 00:35:08.624 "name": "BaseBdev2", 00:35:08.624 
"uuid": "799f2028-ac77-4c19-aa14-9dd8ec5f574e", 00:35:08.624 "is_configured": true, 00:35:08.624 "data_offset": 2048, 00:35:08.624 "data_size": 63488 00:35:08.624 }, 00:35:08.624 { 00:35:08.624 "name": "BaseBdev3", 00:35:08.624 "uuid": "074008da-a4c8-43a4-914a-305b77d9dc10", 00:35:08.624 "is_configured": true, 00:35:08.624 "data_offset": 2048, 00:35:08.624 "data_size": 63488 00:35:08.624 }, 00:35:08.624 { 00:35:08.624 "name": "BaseBdev4", 00:35:08.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:08.624 "is_configured": false, 00:35:08.624 "data_offset": 0, 00:35:08.624 "data_size": 0 00:35:08.624 } 00:35:08.624 ] 00:35:08.624 }' 00:35:08.624 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:08.624 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:08.882 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:35:08.882 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.882 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:09.140 [2024-12-09 23:16:49.521927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:35:09.140 [2024-12-09 23:16:49.522442] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:35:09.140 BaseBdev4 00:35:09.140 [2024-12-09 23:16:49.522572] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:35:09.140 [2024-12-09 23:16:49.522971] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:35:09.140 [2024-12-09 23:16:49.523161] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:35:09.140 [2024-12-09 23:16:49.523183] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:35:09.140 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.140 [2024-12-09 23:16:49.523371] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:09.140 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:35:09.140 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:35:09.140 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:09.140 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:09.140 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:09.140 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:09.140 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:09.140 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.140 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:09.140 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.140 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:35:09.140 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.140 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:09.140 [ 00:35:09.140 { 00:35:09.140 "name": "BaseBdev4", 00:35:09.140 "aliases": [ 00:35:09.140 "293665ac-9455-47a0-98e6-d152ea371202" 00:35:09.140 ], 00:35:09.140 "product_name": "Malloc disk", 00:35:09.140 "block_size": 512, 00:35:09.140 
"num_blocks": 65536, 00:35:09.140 "uuid": "293665ac-9455-47a0-98e6-d152ea371202", 00:35:09.140 "assigned_rate_limits": { 00:35:09.140 "rw_ios_per_sec": 0, 00:35:09.140 "rw_mbytes_per_sec": 0, 00:35:09.140 "r_mbytes_per_sec": 0, 00:35:09.140 "w_mbytes_per_sec": 0 00:35:09.140 }, 00:35:09.140 "claimed": true, 00:35:09.140 "claim_type": "exclusive_write", 00:35:09.140 "zoned": false, 00:35:09.140 "supported_io_types": { 00:35:09.140 "read": true, 00:35:09.140 "write": true, 00:35:09.140 "unmap": true, 00:35:09.140 "flush": true, 00:35:09.140 "reset": true, 00:35:09.140 "nvme_admin": false, 00:35:09.140 "nvme_io": false, 00:35:09.140 "nvme_io_md": false, 00:35:09.140 "write_zeroes": true, 00:35:09.140 "zcopy": true, 00:35:09.140 "get_zone_info": false, 00:35:09.140 "zone_management": false, 00:35:09.140 "zone_append": false, 00:35:09.140 "compare": false, 00:35:09.140 "compare_and_write": false, 00:35:09.140 "abort": true, 00:35:09.140 "seek_hole": false, 00:35:09.140 "seek_data": false, 00:35:09.140 "copy": true, 00:35:09.140 "nvme_iov_md": false 00:35:09.140 }, 00:35:09.140 "memory_domains": [ 00:35:09.140 { 00:35:09.140 "dma_device_id": "system", 00:35:09.140 "dma_device_type": 1 00:35:09.140 }, 00:35:09.140 { 00:35:09.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:09.140 "dma_device_type": 2 00:35:09.140 } 00:35:09.140 ], 00:35:09.140 "driver_specific": {} 00:35:09.140 } 00:35:09.140 ] 00:35:09.140 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.140 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:09.140 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:35:09.140 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:35:09.140 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:35:09.140 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:09.140 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:09.140 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:09.140 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:09.140 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:09.140 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:09.140 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:09.140 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:09.140 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:09.140 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:09.140 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:09.140 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.140 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:09.140 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.140 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:09.140 "name": "Existed_Raid", 00:35:09.140 "uuid": "ce0e3a28-bda5-4630-8d4a-a6f24860cc65", 00:35:09.140 "strip_size_kb": 64, 00:35:09.140 "state": "online", 00:35:09.140 "raid_level": "raid0", 00:35:09.140 "superblock": true, 00:35:09.140 "num_base_bdevs": 4, 
00:35:09.140 "num_base_bdevs_discovered": 4, 00:35:09.140 "num_base_bdevs_operational": 4, 00:35:09.140 "base_bdevs_list": [ 00:35:09.140 { 00:35:09.140 "name": "BaseBdev1", 00:35:09.140 "uuid": "2f0783d4-21fc-4b14-8a97-286eeb63d82f", 00:35:09.140 "is_configured": true, 00:35:09.140 "data_offset": 2048, 00:35:09.140 "data_size": 63488 00:35:09.140 }, 00:35:09.140 { 00:35:09.140 "name": "BaseBdev2", 00:35:09.140 "uuid": "799f2028-ac77-4c19-aa14-9dd8ec5f574e", 00:35:09.140 "is_configured": true, 00:35:09.140 "data_offset": 2048, 00:35:09.140 "data_size": 63488 00:35:09.140 }, 00:35:09.140 { 00:35:09.140 "name": "BaseBdev3", 00:35:09.140 "uuid": "074008da-a4c8-43a4-914a-305b77d9dc10", 00:35:09.140 "is_configured": true, 00:35:09.140 "data_offset": 2048, 00:35:09.140 "data_size": 63488 00:35:09.140 }, 00:35:09.140 { 00:35:09.140 "name": "BaseBdev4", 00:35:09.140 "uuid": "293665ac-9455-47a0-98e6-d152ea371202", 00:35:09.140 "is_configured": true, 00:35:09.140 "data_offset": 2048, 00:35:09.140 "data_size": 63488 00:35:09.140 } 00:35:09.140 ] 00:35:09.140 }' 00:35:09.140 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:09.140 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:09.398 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:35:09.398 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:35:09.398 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:35:09.398 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:35:09.398 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:35:09.398 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:35:09.398 
23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:35:09.398 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.398 23:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:09.398 23:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:35:09.398 [2024-12-09 23:16:49.997712] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:09.398 23:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.663 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:09.663 "name": "Existed_Raid", 00:35:09.663 "aliases": [ 00:35:09.663 "ce0e3a28-bda5-4630-8d4a-a6f24860cc65" 00:35:09.663 ], 00:35:09.663 "product_name": "Raid Volume", 00:35:09.663 "block_size": 512, 00:35:09.663 "num_blocks": 253952, 00:35:09.663 "uuid": "ce0e3a28-bda5-4630-8d4a-a6f24860cc65", 00:35:09.663 "assigned_rate_limits": { 00:35:09.663 "rw_ios_per_sec": 0, 00:35:09.663 "rw_mbytes_per_sec": 0, 00:35:09.663 "r_mbytes_per_sec": 0, 00:35:09.663 "w_mbytes_per_sec": 0 00:35:09.663 }, 00:35:09.663 "claimed": false, 00:35:09.663 "zoned": false, 00:35:09.663 "supported_io_types": { 00:35:09.663 "read": true, 00:35:09.663 "write": true, 00:35:09.663 "unmap": true, 00:35:09.663 "flush": true, 00:35:09.663 "reset": true, 00:35:09.663 "nvme_admin": false, 00:35:09.663 "nvme_io": false, 00:35:09.663 "nvme_io_md": false, 00:35:09.663 "write_zeroes": true, 00:35:09.663 "zcopy": false, 00:35:09.663 "get_zone_info": false, 00:35:09.663 "zone_management": false, 00:35:09.663 "zone_append": false, 00:35:09.663 "compare": false, 00:35:09.663 "compare_and_write": false, 00:35:09.663 "abort": false, 00:35:09.663 "seek_hole": false, 00:35:09.663 "seek_data": false, 00:35:09.663 "copy": false, 00:35:09.663 
"nvme_iov_md": false 00:35:09.663 }, 00:35:09.663 "memory_domains": [ 00:35:09.663 { 00:35:09.663 "dma_device_id": "system", 00:35:09.663 "dma_device_type": 1 00:35:09.663 }, 00:35:09.663 { 00:35:09.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:09.663 "dma_device_type": 2 00:35:09.663 }, 00:35:09.663 { 00:35:09.663 "dma_device_id": "system", 00:35:09.663 "dma_device_type": 1 00:35:09.663 }, 00:35:09.663 { 00:35:09.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:09.663 "dma_device_type": 2 00:35:09.663 }, 00:35:09.663 { 00:35:09.663 "dma_device_id": "system", 00:35:09.663 "dma_device_type": 1 00:35:09.663 }, 00:35:09.663 { 00:35:09.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:09.663 "dma_device_type": 2 00:35:09.663 }, 00:35:09.663 { 00:35:09.663 "dma_device_id": "system", 00:35:09.663 "dma_device_type": 1 00:35:09.663 }, 00:35:09.663 { 00:35:09.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:09.663 "dma_device_type": 2 00:35:09.663 } 00:35:09.663 ], 00:35:09.663 "driver_specific": { 00:35:09.663 "raid": { 00:35:09.663 "uuid": "ce0e3a28-bda5-4630-8d4a-a6f24860cc65", 00:35:09.663 "strip_size_kb": 64, 00:35:09.663 "state": "online", 00:35:09.663 "raid_level": "raid0", 00:35:09.663 "superblock": true, 00:35:09.663 "num_base_bdevs": 4, 00:35:09.663 "num_base_bdevs_discovered": 4, 00:35:09.663 "num_base_bdevs_operational": 4, 00:35:09.663 "base_bdevs_list": [ 00:35:09.663 { 00:35:09.663 "name": "BaseBdev1", 00:35:09.663 "uuid": "2f0783d4-21fc-4b14-8a97-286eeb63d82f", 00:35:09.663 "is_configured": true, 00:35:09.663 "data_offset": 2048, 00:35:09.663 "data_size": 63488 00:35:09.663 }, 00:35:09.663 { 00:35:09.663 "name": "BaseBdev2", 00:35:09.663 "uuid": "799f2028-ac77-4c19-aa14-9dd8ec5f574e", 00:35:09.663 "is_configured": true, 00:35:09.663 "data_offset": 2048, 00:35:09.663 "data_size": 63488 00:35:09.663 }, 00:35:09.663 { 00:35:09.663 "name": "BaseBdev3", 00:35:09.663 "uuid": "074008da-a4c8-43a4-914a-305b77d9dc10", 00:35:09.663 "is_configured": true, 
00:35:09.663 "data_offset": 2048, 00:35:09.663 "data_size": 63488 00:35:09.663 }, 00:35:09.663 { 00:35:09.663 "name": "BaseBdev4", 00:35:09.663 "uuid": "293665ac-9455-47a0-98e6-d152ea371202", 00:35:09.663 "is_configured": true, 00:35:09.663 "data_offset": 2048, 00:35:09.663 "data_size": 63488 00:35:09.663 } 00:35:09.663 ] 00:35:09.663 } 00:35:09.663 } 00:35:09.663 }' 00:35:09.663 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:09.663 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:35:09.663 BaseBdev2 00:35:09.663 BaseBdev3 00:35:09.663 BaseBdev4' 00:35:09.663 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:09.663 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:35:09.663 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:09.663 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:35:09.663 23:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.663 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:09.663 23:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:09.663 23:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.663 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:09.663 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:09.663 23:16:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:09.663 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:35:09.663 23:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.663 23:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:09.663 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:09.663 23:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.663 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:09.663 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:09.663 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:09.663 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:35:09.663 23:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.663 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:09.663 23:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:09.663 23:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.663 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:09.663 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:09.663 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:35:09.663 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:35:09.663 23:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.663 23:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:09.663 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:09.663 23:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.663 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:09.663 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:09.663 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:35:09.663 23:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.663 23:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:09.921 [2024-12-09 23:16:50.297014] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:09.921 [2024-12-09 23:16:50.297048] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:09.921 [2024-12-09 23:16:50.297102] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:09.921 23:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.921 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:35:09.921 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:35:09.921 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:35:09.921 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:35:09.921 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:35:09.921 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:35:09.921 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:09.921 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:35:09.921 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:09.921 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:09.921 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:09.921 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:09.921 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:09.921 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:09.921 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:09.921 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:09.921 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:09.921 23:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.921 23:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:09.921 23:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:35:09.921 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:09.921 "name": "Existed_Raid", 00:35:09.921 "uuid": "ce0e3a28-bda5-4630-8d4a-a6f24860cc65", 00:35:09.921 "strip_size_kb": 64, 00:35:09.921 "state": "offline", 00:35:09.921 "raid_level": "raid0", 00:35:09.921 "superblock": true, 00:35:09.921 "num_base_bdevs": 4, 00:35:09.921 "num_base_bdevs_discovered": 3, 00:35:09.921 "num_base_bdevs_operational": 3, 00:35:09.921 "base_bdevs_list": [ 00:35:09.921 { 00:35:09.921 "name": null, 00:35:09.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:09.921 "is_configured": false, 00:35:09.922 "data_offset": 0, 00:35:09.922 "data_size": 63488 00:35:09.922 }, 00:35:09.922 { 00:35:09.922 "name": "BaseBdev2", 00:35:09.922 "uuid": "799f2028-ac77-4c19-aa14-9dd8ec5f574e", 00:35:09.922 "is_configured": true, 00:35:09.922 "data_offset": 2048, 00:35:09.922 "data_size": 63488 00:35:09.922 }, 00:35:09.922 { 00:35:09.922 "name": "BaseBdev3", 00:35:09.922 "uuid": "074008da-a4c8-43a4-914a-305b77d9dc10", 00:35:09.922 "is_configured": true, 00:35:09.922 "data_offset": 2048, 00:35:09.922 "data_size": 63488 00:35:09.922 }, 00:35:09.922 { 00:35:09.922 "name": "BaseBdev4", 00:35:09.922 "uuid": "293665ac-9455-47a0-98e6-d152ea371202", 00:35:09.922 "is_configured": true, 00:35:09.922 "data_offset": 2048, 00:35:09.922 "data_size": 63488 00:35:09.922 } 00:35:09.922 ] 00:35:09.922 }' 00:35:09.922 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:09.922 23:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:10.519 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:35:10.519 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:10.519 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:10.519 
23:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.519 23:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:10.519 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:35:10.519 23:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.519 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:35:10.519 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:10.519 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:35:10.519 23:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.519 23:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:10.519 [2024-12-09 23:16:50.889589] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:10.519 23:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.519 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:35:10.519 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:10.519 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:10.519 23:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.519 23:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:35:10.519 23:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:10.519 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:35:10.519 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:35:10.519 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:10.519 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:35:10.519 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.519 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:10.519 [2024-12-09 23:16:51.037868] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:35:10.519 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.519 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:35:10.519 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:10.519 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:10.519 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:35:10.519 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.519 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:35:10.777 23:16:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:10.777 [2024-12-09 23:16:51.186967] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:35:10.777 [2024-12-09 23:16:51.187155] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:10.777 BaseBdev2 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:10.777 [ 00:35:10.777 { 00:35:10.777 "name": "BaseBdev2", 00:35:10.777 "aliases": [ 00:35:10.777 
"f07e1bc1-3df7-4754-ac03-62996e92aa13" 00:35:10.777 ], 00:35:10.777 "product_name": "Malloc disk", 00:35:10.777 "block_size": 512, 00:35:10.777 "num_blocks": 65536, 00:35:10.777 "uuid": "f07e1bc1-3df7-4754-ac03-62996e92aa13", 00:35:10.777 "assigned_rate_limits": { 00:35:10.777 "rw_ios_per_sec": 0, 00:35:10.777 "rw_mbytes_per_sec": 0, 00:35:10.777 "r_mbytes_per_sec": 0, 00:35:10.777 "w_mbytes_per_sec": 0 00:35:10.777 }, 00:35:10.777 "claimed": false, 00:35:10.777 "zoned": false, 00:35:10.777 "supported_io_types": { 00:35:10.777 "read": true, 00:35:10.777 "write": true, 00:35:10.777 "unmap": true, 00:35:10.777 "flush": true, 00:35:10.777 "reset": true, 00:35:10.777 "nvme_admin": false, 00:35:10.777 "nvme_io": false, 00:35:10.777 "nvme_io_md": false, 00:35:10.777 "write_zeroes": true, 00:35:10.777 "zcopy": true, 00:35:10.777 "get_zone_info": false, 00:35:10.777 "zone_management": false, 00:35:10.777 "zone_append": false, 00:35:10.777 "compare": false, 00:35:10.777 "compare_and_write": false, 00:35:10.777 "abort": true, 00:35:10.777 "seek_hole": false, 00:35:10.777 "seek_data": false, 00:35:10.777 "copy": true, 00:35:10.777 "nvme_iov_md": false 00:35:10.777 }, 00:35:10.777 "memory_domains": [ 00:35:10.777 { 00:35:10.777 "dma_device_id": "system", 00:35:10.777 "dma_device_type": 1 00:35:10.777 }, 00:35:10.777 { 00:35:10.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:10.777 "dma_device_type": 2 00:35:10.777 } 00:35:10.777 ], 00:35:10.777 "driver_specific": {} 00:35:10.777 } 00:35:10.777 ] 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:35:10.777 23:16:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.777 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.035 BaseBdev3 00:35:11.035 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.035 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:35:11.035 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:35:11.035 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:11.035 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:11.035 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:11.035 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:11.035 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:11.035 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.035 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.035 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.035 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:35:11.035 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.035 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.035 [ 00:35:11.035 { 
00:35:11.035 "name": "BaseBdev3", 00:35:11.035 "aliases": [ 00:35:11.035 "785073a0-259c-480a-ba4b-78d3b1c5e7f2" 00:35:11.035 ], 00:35:11.035 "product_name": "Malloc disk", 00:35:11.035 "block_size": 512, 00:35:11.035 "num_blocks": 65536, 00:35:11.035 "uuid": "785073a0-259c-480a-ba4b-78d3b1c5e7f2", 00:35:11.035 "assigned_rate_limits": { 00:35:11.035 "rw_ios_per_sec": 0, 00:35:11.035 "rw_mbytes_per_sec": 0, 00:35:11.035 "r_mbytes_per_sec": 0, 00:35:11.035 "w_mbytes_per_sec": 0 00:35:11.035 }, 00:35:11.035 "claimed": false, 00:35:11.035 "zoned": false, 00:35:11.035 "supported_io_types": { 00:35:11.035 "read": true, 00:35:11.035 "write": true, 00:35:11.035 "unmap": true, 00:35:11.035 "flush": true, 00:35:11.035 "reset": true, 00:35:11.035 "nvme_admin": false, 00:35:11.035 "nvme_io": false, 00:35:11.035 "nvme_io_md": false, 00:35:11.035 "write_zeroes": true, 00:35:11.035 "zcopy": true, 00:35:11.035 "get_zone_info": false, 00:35:11.035 "zone_management": false, 00:35:11.035 "zone_append": false, 00:35:11.035 "compare": false, 00:35:11.035 "compare_and_write": false, 00:35:11.035 "abort": true, 00:35:11.035 "seek_hole": false, 00:35:11.035 "seek_data": false, 00:35:11.035 "copy": true, 00:35:11.035 "nvme_iov_md": false 00:35:11.035 }, 00:35:11.035 "memory_domains": [ 00:35:11.035 { 00:35:11.035 "dma_device_id": "system", 00:35:11.035 "dma_device_type": 1 00:35:11.035 }, 00:35:11.035 { 00:35:11.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:11.035 "dma_device_type": 2 00:35:11.035 } 00:35:11.035 ], 00:35:11.035 "driver_specific": {} 00:35:11.035 } 00:35:11.035 ] 00:35:11.035 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.035 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:11.035 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:35:11.035 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:35:11.035 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:35:11.035 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.035 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.035 BaseBdev4 00:35:11.035 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.035 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:35:11.035 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:35:11.035 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:11.035 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:11.035 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:11.035 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:11.035 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:11.035 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.035 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.035 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.035 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:35:11.035 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.035 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:35:11.035 [ 00:35:11.035 { 00:35:11.035 "name": "BaseBdev4", 00:35:11.035 "aliases": [ 00:35:11.035 "bc3a0d1c-6983-41a1-89dc-6557dd381a64" 00:35:11.035 ], 00:35:11.035 "product_name": "Malloc disk", 00:35:11.035 "block_size": 512, 00:35:11.035 "num_blocks": 65536, 00:35:11.035 "uuid": "bc3a0d1c-6983-41a1-89dc-6557dd381a64", 00:35:11.035 "assigned_rate_limits": { 00:35:11.035 "rw_ios_per_sec": 0, 00:35:11.035 "rw_mbytes_per_sec": 0, 00:35:11.035 "r_mbytes_per_sec": 0, 00:35:11.035 "w_mbytes_per_sec": 0 00:35:11.035 }, 00:35:11.035 "claimed": false, 00:35:11.035 "zoned": false, 00:35:11.035 "supported_io_types": { 00:35:11.035 "read": true, 00:35:11.035 "write": true, 00:35:11.035 "unmap": true, 00:35:11.035 "flush": true, 00:35:11.035 "reset": true, 00:35:11.035 "nvme_admin": false, 00:35:11.035 "nvme_io": false, 00:35:11.035 "nvme_io_md": false, 00:35:11.035 "write_zeroes": true, 00:35:11.035 "zcopy": true, 00:35:11.035 "get_zone_info": false, 00:35:11.035 "zone_management": false, 00:35:11.035 "zone_append": false, 00:35:11.035 "compare": false, 00:35:11.035 "compare_and_write": false, 00:35:11.035 "abort": true, 00:35:11.035 "seek_hole": false, 00:35:11.035 "seek_data": false, 00:35:11.035 "copy": true, 00:35:11.035 "nvme_iov_md": false 00:35:11.035 }, 00:35:11.035 "memory_domains": [ 00:35:11.035 { 00:35:11.035 "dma_device_id": "system", 00:35:11.035 "dma_device_type": 1 00:35:11.035 }, 00:35:11.035 { 00:35:11.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:11.035 "dma_device_type": 2 00:35:11.035 } 00:35:11.035 ], 00:35:11.035 "driver_specific": {} 00:35:11.035 } 00:35:11.035 ] 00:35:11.036 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.036 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:11.036 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:35:11.036 23:16:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:35:11.036 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:35:11.036 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.036 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.036 [2024-12-09 23:16:51.555180] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:11.036 [2024-12-09 23:16:51.555331] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:11.036 [2024-12-09 23:16:51.555481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:11.036 [2024-12-09 23:16:51.557770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:11.036 [2024-12-09 23:16:51.557942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:35:11.036 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.036 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:11.036 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:11.036 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:11.036 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:11.036 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:11.036 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:35:11.036 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:11.036 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:11.036 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:11.036 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:11.036 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:11.036 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:11.036 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.036 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.036 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.036 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:11.036 "name": "Existed_Raid", 00:35:11.036 "uuid": "3c0de3b1-c036-4b50-bf78-bb86360b7ce0", 00:35:11.036 "strip_size_kb": 64, 00:35:11.036 "state": "configuring", 00:35:11.036 "raid_level": "raid0", 00:35:11.036 "superblock": true, 00:35:11.036 "num_base_bdevs": 4, 00:35:11.036 "num_base_bdevs_discovered": 3, 00:35:11.036 "num_base_bdevs_operational": 4, 00:35:11.036 "base_bdevs_list": [ 00:35:11.036 { 00:35:11.036 "name": "BaseBdev1", 00:35:11.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:11.036 "is_configured": false, 00:35:11.036 "data_offset": 0, 00:35:11.036 "data_size": 0 00:35:11.036 }, 00:35:11.036 { 00:35:11.036 "name": "BaseBdev2", 00:35:11.036 "uuid": "f07e1bc1-3df7-4754-ac03-62996e92aa13", 00:35:11.036 "is_configured": true, 00:35:11.036 "data_offset": 2048, 00:35:11.036 "data_size": 63488 
00:35:11.036 }, 00:35:11.036 { 00:35:11.036 "name": "BaseBdev3", 00:35:11.036 "uuid": "785073a0-259c-480a-ba4b-78d3b1c5e7f2", 00:35:11.036 "is_configured": true, 00:35:11.036 "data_offset": 2048, 00:35:11.036 "data_size": 63488 00:35:11.036 }, 00:35:11.036 { 00:35:11.036 "name": "BaseBdev4", 00:35:11.036 "uuid": "bc3a0d1c-6983-41a1-89dc-6557dd381a64", 00:35:11.036 "is_configured": true, 00:35:11.036 "data_offset": 2048, 00:35:11.036 "data_size": 63488 00:35:11.036 } 00:35:11.036 ] 00:35:11.036 }' 00:35:11.036 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:11.036 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.602 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:35:11.602 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.602 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.602 [2024-12-09 23:16:51.966625] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:11.602 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.603 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:11.603 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:11.603 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:11.603 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:11.603 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:11.603 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:35:11.603 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:11.603 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:11.603 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:11.603 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:11.603 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:11.603 23:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:11.603 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.603 23:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.603 23:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.603 23:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:11.603 "name": "Existed_Raid", 00:35:11.603 "uuid": "3c0de3b1-c036-4b50-bf78-bb86360b7ce0", 00:35:11.603 "strip_size_kb": 64, 00:35:11.603 "state": "configuring", 00:35:11.603 "raid_level": "raid0", 00:35:11.603 "superblock": true, 00:35:11.603 "num_base_bdevs": 4, 00:35:11.603 "num_base_bdevs_discovered": 2, 00:35:11.603 "num_base_bdevs_operational": 4, 00:35:11.603 "base_bdevs_list": [ 00:35:11.603 { 00:35:11.603 "name": "BaseBdev1", 00:35:11.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:11.603 "is_configured": false, 00:35:11.603 "data_offset": 0, 00:35:11.603 "data_size": 0 00:35:11.603 }, 00:35:11.603 { 00:35:11.603 "name": null, 00:35:11.603 "uuid": "f07e1bc1-3df7-4754-ac03-62996e92aa13", 00:35:11.603 "is_configured": false, 00:35:11.603 "data_offset": 0, 00:35:11.603 "data_size": 63488 
00:35:11.603 }, 00:35:11.603 { 00:35:11.603 "name": "BaseBdev3", 00:35:11.603 "uuid": "785073a0-259c-480a-ba4b-78d3b1c5e7f2", 00:35:11.603 "is_configured": true, 00:35:11.603 "data_offset": 2048, 00:35:11.603 "data_size": 63488 00:35:11.603 }, 00:35:11.603 { 00:35:11.603 "name": "BaseBdev4", 00:35:11.603 "uuid": "bc3a0d1c-6983-41a1-89dc-6557dd381a64", 00:35:11.603 "is_configured": true, 00:35:11.603 "data_offset": 2048, 00:35:11.603 "data_size": 63488 00:35:11.603 } 00:35:11.603 ] 00:35:11.603 }' 00:35:11.603 23:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:11.603 23:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.862 23:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:11.862 23:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.862 23:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:35:11.862 23:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.862 23:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.862 23:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:35:11.862 23:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:35:11.862 23:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.862 23:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.862 [2024-12-09 23:16:52.420682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:11.862 BaseBdev1 00:35:11.862 23:16:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.862 23:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:35:11.862 23:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:35:11.862 23:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:11.862 23:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:11.862 23:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:11.862 23:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:11.862 23:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:11.862 23:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.862 23:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.862 23:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.862 23:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:11.862 23:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.862 23:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.862 [ 00:35:11.862 { 00:35:11.862 "name": "BaseBdev1", 00:35:11.862 "aliases": [ 00:35:11.862 "b3ac0e56-da61-4592-818f-9bf50122a21b" 00:35:11.862 ], 00:35:11.862 "product_name": "Malloc disk", 00:35:11.862 "block_size": 512, 00:35:11.862 "num_blocks": 65536, 00:35:11.862 "uuid": "b3ac0e56-da61-4592-818f-9bf50122a21b", 00:35:11.862 "assigned_rate_limits": { 00:35:11.862 "rw_ios_per_sec": 0, 00:35:11.862 "rw_mbytes_per_sec": 0, 
00:35:11.862 "r_mbytes_per_sec": 0, 00:35:11.862 "w_mbytes_per_sec": 0 00:35:11.862 }, 00:35:11.862 "claimed": true, 00:35:11.862 "claim_type": "exclusive_write", 00:35:11.862 "zoned": false, 00:35:11.862 "supported_io_types": { 00:35:11.862 "read": true, 00:35:11.862 "write": true, 00:35:11.862 "unmap": true, 00:35:11.862 "flush": true, 00:35:11.862 "reset": true, 00:35:11.862 "nvme_admin": false, 00:35:11.862 "nvme_io": false, 00:35:11.862 "nvme_io_md": false, 00:35:11.862 "write_zeroes": true, 00:35:11.862 "zcopy": true, 00:35:11.862 "get_zone_info": false, 00:35:11.862 "zone_management": false, 00:35:11.862 "zone_append": false, 00:35:11.862 "compare": false, 00:35:11.862 "compare_and_write": false, 00:35:11.862 "abort": true, 00:35:11.862 "seek_hole": false, 00:35:11.862 "seek_data": false, 00:35:11.862 "copy": true, 00:35:11.862 "nvme_iov_md": false 00:35:11.862 }, 00:35:11.862 "memory_domains": [ 00:35:11.862 { 00:35:11.862 "dma_device_id": "system", 00:35:11.862 "dma_device_type": 1 00:35:11.862 }, 00:35:11.862 { 00:35:11.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:11.862 "dma_device_type": 2 00:35:11.862 } 00:35:11.862 ], 00:35:11.862 "driver_specific": {} 00:35:11.862 } 00:35:11.862 ] 00:35:11.862 23:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.862 23:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:11.862 23:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:11.862 23:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:11.862 23:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:11.862 23:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:11.862 23:16:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:11.862 23:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:11.862 23:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:11.862 23:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:11.862 23:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:11.862 23:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:11.862 23:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:11.862 23:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:11.862 23:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.862 23:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.863 23:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.122 23:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:12.122 "name": "Existed_Raid", 00:35:12.122 "uuid": "3c0de3b1-c036-4b50-bf78-bb86360b7ce0", 00:35:12.122 "strip_size_kb": 64, 00:35:12.122 "state": "configuring", 00:35:12.122 "raid_level": "raid0", 00:35:12.122 "superblock": true, 00:35:12.122 "num_base_bdevs": 4, 00:35:12.122 "num_base_bdevs_discovered": 3, 00:35:12.122 "num_base_bdevs_operational": 4, 00:35:12.122 "base_bdevs_list": [ 00:35:12.122 { 00:35:12.122 "name": "BaseBdev1", 00:35:12.122 "uuid": "b3ac0e56-da61-4592-818f-9bf50122a21b", 00:35:12.122 "is_configured": true, 00:35:12.122 "data_offset": 2048, 00:35:12.122 "data_size": 63488 00:35:12.122 }, 00:35:12.122 { 
00:35:12.122 "name": null, 00:35:12.122 "uuid": "f07e1bc1-3df7-4754-ac03-62996e92aa13", 00:35:12.122 "is_configured": false, 00:35:12.122 "data_offset": 0, 00:35:12.122 "data_size": 63488 00:35:12.122 }, 00:35:12.122 { 00:35:12.122 "name": "BaseBdev3", 00:35:12.122 "uuid": "785073a0-259c-480a-ba4b-78d3b1c5e7f2", 00:35:12.122 "is_configured": true, 00:35:12.122 "data_offset": 2048, 00:35:12.122 "data_size": 63488 00:35:12.122 }, 00:35:12.122 { 00:35:12.122 "name": "BaseBdev4", 00:35:12.122 "uuid": "bc3a0d1c-6983-41a1-89dc-6557dd381a64", 00:35:12.122 "is_configured": true, 00:35:12.122 "data_offset": 2048, 00:35:12.122 "data_size": 63488 00:35:12.122 } 00:35:12.122 ] 00:35:12.122 }' 00:35:12.122 23:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:12.122 23:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:12.380 23:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:35:12.380 23:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:12.380 23:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.380 23:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:12.380 23:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.380 23:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:35:12.380 23:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:35:12.380 23:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.380 23:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:12.380 [2024-12-09 23:16:52.912278] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:35:12.380 23:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.380 23:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:12.380 23:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:12.380 23:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:12.381 23:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:12.381 23:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:12.381 23:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:12.381 23:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:12.381 23:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:12.381 23:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:12.381 23:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:12.381 23:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:12.381 23:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:12.381 23:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.381 23:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:12.381 23:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.381 23:16:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:12.381 "name": "Existed_Raid", 00:35:12.381 "uuid": "3c0de3b1-c036-4b50-bf78-bb86360b7ce0", 00:35:12.381 "strip_size_kb": 64, 00:35:12.381 "state": "configuring", 00:35:12.381 "raid_level": "raid0", 00:35:12.381 "superblock": true, 00:35:12.381 "num_base_bdevs": 4, 00:35:12.381 "num_base_bdevs_discovered": 2, 00:35:12.381 "num_base_bdevs_operational": 4, 00:35:12.381 "base_bdevs_list": [ 00:35:12.381 { 00:35:12.381 "name": "BaseBdev1", 00:35:12.381 "uuid": "b3ac0e56-da61-4592-818f-9bf50122a21b", 00:35:12.381 "is_configured": true, 00:35:12.381 "data_offset": 2048, 00:35:12.381 "data_size": 63488 00:35:12.381 }, 00:35:12.381 { 00:35:12.381 "name": null, 00:35:12.381 "uuid": "f07e1bc1-3df7-4754-ac03-62996e92aa13", 00:35:12.381 "is_configured": false, 00:35:12.381 "data_offset": 0, 00:35:12.381 "data_size": 63488 00:35:12.381 }, 00:35:12.381 { 00:35:12.381 "name": null, 00:35:12.381 "uuid": "785073a0-259c-480a-ba4b-78d3b1c5e7f2", 00:35:12.381 "is_configured": false, 00:35:12.381 "data_offset": 0, 00:35:12.381 "data_size": 63488 00:35:12.381 }, 00:35:12.381 { 00:35:12.381 "name": "BaseBdev4", 00:35:12.381 "uuid": "bc3a0d1c-6983-41a1-89dc-6557dd381a64", 00:35:12.381 "is_configured": true, 00:35:12.381 "data_offset": 2048, 00:35:12.381 "data_size": 63488 00:35:12.381 } 00:35:12.381 ] 00:35:12.381 }' 00:35:12.381 23:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:12.381 23:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:12.947 23:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:12.947 23:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.947 23:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:12.947 23:16:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:35:12.947 23:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.947 23:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:35:12.947 23:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:35:12.947 23:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.947 23:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:12.947 [2024-12-09 23:16:53.411635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:12.947 23:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.947 23:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:12.947 23:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:12.947 23:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:12.947 23:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:12.947 23:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:12.947 23:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:12.947 23:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:12.947 23:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:12.947 23:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:35:12.947 23:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:12.947 23:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:12.948 23:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.948 23:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:12.948 23:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:12.948 23:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.948 23:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:12.948 "name": "Existed_Raid", 00:35:12.948 "uuid": "3c0de3b1-c036-4b50-bf78-bb86360b7ce0", 00:35:12.948 "strip_size_kb": 64, 00:35:12.948 "state": "configuring", 00:35:12.948 "raid_level": "raid0", 00:35:12.948 "superblock": true, 00:35:12.948 "num_base_bdevs": 4, 00:35:12.948 "num_base_bdevs_discovered": 3, 00:35:12.948 "num_base_bdevs_operational": 4, 00:35:12.948 "base_bdevs_list": [ 00:35:12.948 { 00:35:12.948 "name": "BaseBdev1", 00:35:12.948 "uuid": "b3ac0e56-da61-4592-818f-9bf50122a21b", 00:35:12.948 "is_configured": true, 00:35:12.948 "data_offset": 2048, 00:35:12.948 "data_size": 63488 00:35:12.948 }, 00:35:12.948 { 00:35:12.948 "name": null, 00:35:12.948 "uuid": "f07e1bc1-3df7-4754-ac03-62996e92aa13", 00:35:12.948 "is_configured": false, 00:35:12.948 "data_offset": 0, 00:35:12.948 "data_size": 63488 00:35:12.948 }, 00:35:12.948 { 00:35:12.948 "name": "BaseBdev3", 00:35:12.948 "uuid": "785073a0-259c-480a-ba4b-78d3b1c5e7f2", 00:35:12.948 "is_configured": true, 00:35:12.948 "data_offset": 2048, 00:35:12.948 "data_size": 63488 00:35:12.948 }, 00:35:12.948 { 00:35:12.948 "name": "BaseBdev4", 00:35:12.948 "uuid": 
"bc3a0d1c-6983-41a1-89dc-6557dd381a64", 00:35:12.948 "is_configured": true, 00:35:12.948 "data_offset": 2048, 00:35:12.948 "data_size": 63488 00:35:12.948 } 00:35:12.948 ] 00:35:12.948 }' 00:35:12.948 23:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:12.948 23:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:13.206 23:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:13.206 23:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.206 23:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:13.206 23:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:35:13.206 23:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.465 23:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:35:13.465 23:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:35:13.465 23:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.465 23:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:13.465 [2024-12-09 23:16:53.859017] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:13.465 23:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.465 23:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:13.465 23:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:13.465 23:16:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:13.465 23:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:13.465 23:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:13.465 23:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:13.465 23:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:13.465 23:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:13.465 23:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:13.465 23:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:13.465 23:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:13.465 23:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:13.465 23:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.465 23:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:13.465 23:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.465 23:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:13.465 "name": "Existed_Raid", 00:35:13.465 "uuid": "3c0de3b1-c036-4b50-bf78-bb86360b7ce0", 00:35:13.465 "strip_size_kb": 64, 00:35:13.465 "state": "configuring", 00:35:13.465 "raid_level": "raid0", 00:35:13.465 "superblock": true, 00:35:13.465 "num_base_bdevs": 4, 00:35:13.465 "num_base_bdevs_discovered": 2, 00:35:13.465 "num_base_bdevs_operational": 4, 00:35:13.465 "base_bdevs_list": [ 00:35:13.465 { 00:35:13.465 "name": null, 00:35:13.465 
"uuid": "b3ac0e56-da61-4592-818f-9bf50122a21b", 00:35:13.465 "is_configured": false, 00:35:13.465 "data_offset": 0, 00:35:13.465 "data_size": 63488 00:35:13.465 }, 00:35:13.465 { 00:35:13.465 "name": null, 00:35:13.465 "uuid": "f07e1bc1-3df7-4754-ac03-62996e92aa13", 00:35:13.465 "is_configured": false, 00:35:13.465 "data_offset": 0, 00:35:13.465 "data_size": 63488 00:35:13.465 }, 00:35:13.465 { 00:35:13.465 "name": "BaseBdev3", 00:35:13.465 "uuid": "785073a0-259c-480a-ba4b-78d3b1c5e7f2", 00:35:13.465 "is_configured": true, 00:35:13.465 "data_offset": 2048, 00:35:13.465 "data_size": 63488 00:35:13.465 }, 00:35:13.465 { 00:35:13.465 "name": "BaseBdev4", 00:35:13.465 "uuid": "bc3a0d1c-6983-41a1-89dc-6557dd381a64", 00:35:13.465 "is_configured": true, 00:35:13.465 "data_offset": 2048, 00:35:13.465 "data_size": 63488 00:35:13.465 } 00:35:13.465 ] 00:35:13.465 }' 00:35:13.465 23:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:13.465 23:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:14.033 23:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:14.033 23:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:35:14.033 23:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.033 23:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:14.033 23:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.033 23:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:35:14.033 23:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:35:14.033 23:16:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.033 23:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:14.033 [2024-12-09 23:16:54.436307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:14.033 23:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.033 23:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:14.033 23:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:14.033 23:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:14.033 23:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:14.033 23:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:14.033 23:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:14.033 23:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:14.033 23:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:14.033 23:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:14.033 23:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:14.033 23:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:14.033 23:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:14.033 23:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.033 23:16:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:14.033 23:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.033 23:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:14.033 "name": "Existed_Raid", 00:35:14.033 "uuid": "3c0de3b1-c036-4b50-bf78-bb86360b7ce0", 00:35:14.033 "strip_size_kb": 64, 00:35:14.033 "state": "configuring", 00:35:14.033 "raid_level": "raid0", 00:35:14.033 "superblock": true, 00:35:14.033 "num_base_bdevs": 4, 00:35:14.033 "num_base_bdevs_discovered": 3, 00:35:14.033 "num_base_bdevs_operational": 4, 00:35:14.033 "base_bdevs_list": [ 00:35:14.033 { 00:35:14.033 "name": null, 00:35:14.033 "uuid": "b3ac0e56-da61-4592-818f-9bf50122a21b", 00:35:14.033 "is_configured": false, 00:35:14.033 "data_offset": 0, 00:35:14.033 "data_size": 63488 00:35:14.033 }, 00:35:14.033 { 00:35:14.033 "name": "BaseBdev2", 00:35:14.033 "uuid": "f07e1bc1-3df7-4754-ac03-62996e92aa13", 00:35:14.033 "is_configured": true, 00:35:14.033 "data_offset": 2048, 00:35:14.033 "data_size": 63488 00:35:14.033 }, 00:35:14.033 { 00:35:14.033 "name": "BaseBdev3", 00:35:14.033 "uuid": "785073a0-259c-480a-ba4b-78d3b1c5e7f2", 00:35:14.033 "is_configured": true, 00:35:14.033 "data_offset": 2048, 00:35:14.033 "data_size": 63488 00:35:14.033 }, 00:35:14.033 { 00:35:14.033 "name": "BaseBdev4", 00:35:14.033 "uuid": "bc3a0d1c-6983-41a1-89dc-6557dd381a64", 00:35:14.033 "is_configured": true, 00:35:14.033 "data_offset": 2048, 00:35:14.033 "data_size": 63488 00:35:14.033 } 00:35:14.033 ] 00:35:14.033 }' 00:35:14.033 23:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:14.033 23:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:14.291 23:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:35:14.291 23:16:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:14.291 23:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.291 23:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:14.291 23:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.291 23:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:35:14.291 23:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:14.291 23:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:35:14.291 23:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.291 23:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:14.549 23:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.549 23:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b3ac0e56-da61-4592-818f-9bf50122a21b 00:35:14.549 23:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.549 23:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:14.549 [2024-12-09 23:16:54.986742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:35:14.549 [2024-12-09 23:16:54.986986] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:35:14.549 [2024-12-09 23:16:54.987002] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:35:14.549 [2024-12-09 23:16:54.987290] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:35:14.549 [2024-12-09 23:16:54.987462] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:35:14.549 [2024-12-09 23:16:54.987477] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:35:14.549 [2024-12-09 23:16:54.987619] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:14.549 NewBaseBdev 00:35:14.549 23:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.549 23:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:35:14.549 23:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:35:14.549 23:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:14.549 23:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:14.549 23:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:14.549 23:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:14.549 23:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:14.549 23:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.549 23:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:14.549 23:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.549 23:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:35:14.549 23:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.549 23:16:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:14.549 [ 00:35:14.549 { 00:35:14.549 "name": "NewBaseBdev", 00:35:14.549 "aliases": [ 00:35:14.549 "b3ac0e56-da61-4592-818f-9bf50122a21b" 00:35:14.549 ], 00:35:14.549 "product_name": "Malloc disk", 00:35:14.549 "block_size": 512, 00:35:14.549 "num_blocks": 65536, 00:35:14.549 "uuid": "b3ac0e56-da61-4592-818f-9bf50122a21b", 00:35:14.549 "assigned_rate_limits": { 00:35:14.549 "rw_ios_per_sec": 0, 00:35:14.549 "rw_mbytes_per_sec": 0, 00:35:14.549 "r_mbytes_per_sec": 0, 00:35:14.549 "w_mbytes_per_sec": 0 00:35:14.549 }, 00:35:14.549 "claimed": true, 00:35:14.549 "claim_type": "exclusive_write", 00:35:14.549 "zoned": false, 00:35:14.549 "supported_io_types": { 00:35:14.549 "read": true, 00:35:14.549 "write": true, 00:35:14.549 "unmap": true, 00:35:14.549 "flush": true, 00:35:14.549 "reset": true, 00:35:14.549 "nvme_admin": false, 00:35:14.549 "nvme_io": false, 00:35:14.549 "nvme_io_md": false, 00:35:14.549 "write_zeroes": true, 00:35:14.549 "zcopy": true, 00:35:14.549 "get_zone_info": false, 00:35:14.549 "zone_management": false, 00:35:14.549 "zone_append": false, 00:35:14.549 "compare": false, 00:35:14.549 "compare_and_write": false, 00:35:14.549 "abort": true, 00:35:14.549 "seek_hole": false, 00:35:14.549 "seek_data": false, 00:35:14.549 "copy": true, 00:35:14.549 "nvme_iov_md": false 00:35:14.549 }, 00:35:14.549 "memory_domains": [ 00:35:14.549 { 00:35:14.549 "dma_device_id": "system", 00:35:14.549 "dma_device_type": 1 00:35:14.549 }, 00:35:14.549 { 00:35:14.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:14.549 "dma_device_type": 2 00:35:14.549 } 00:35:14.549 ], 00:35:14.549 "driver_specific": {} 00:35:14.549 } 00:35:14.549 ] 00:35:14.549 23:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.549 23:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:14.549 23:16:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:35:14.549 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:14.549 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:14.549 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:14.549 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:14.549 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:14.549 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:14.549 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:14.549 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:14.549 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:14.549 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:14.549 23:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.549 23:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:14.549 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:14.549 23:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.549 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:14.549 "name": "Existed_Raid", 00:35:14.549 "uuid": "3c0de3b1-c036-4b50-bf78-bb86360b7ce0", 00:35:14.549 "strip_size_kb": 64, 00:35:14.549 
"state": "online", 00:35:14.549 "raid_level": "raid0", 00:35:14.549 "superblock": true, 00:35:14.549 "num_base_bdevs": 4, 00:35:14.549 "num_base_bdevs_discovered": 4, 00:35:14.549 "num_base_bdevs_operational": 4, 00:35:14.549 "base_bdevs_list": [ 00:35:14.549 { 00:35:14.549 "name": "NewBaseBdev", 00:35:14.549 "uuid": "b3ac0e56-da61-4592-818f-9bf50122a21b", 00:35:14.549 "is_configured": true, 00:35:14.549 "data_offset": 2048, 00:35:14.549 "data_size": 63488 00:35:14.549 }, 00:35:14.549 { 00:35:14.549 "name": "BaseBdev2", 00:35:14.549 "uuid": "f07e1bc1-3df7-4754-ac03-62996e92aa13", 00:35:14.549 "is_configured": true, 00:35:14.549 "data_offset": 2048, 00:35:14.549 "data_size": 63488 00:35:14.549 }, 00:35:14.549 { 00:35:14.549 "name": "BaseBdev3", 00:35:14.549 "uuid": "785073a0-259c-480a-ba4b-78d3b1c5e7f2", 00:35:14.549 "is_configured": true, 00:35:14.549 "data_offset": 2048, 00:35:14.549 "data_size": 63488 00:35:14.549 }, 00:35:14.549 { 00:35:14.549 "name": "BaseBdev4", 00:35:14.549 "uuid": "bc3a0d1c-6983-41a1-89dc-6557dd381a64", 00:35:14.549 "is_configured": true, 00:35:14.549 "data_offset": 2048, 00:35:14.549 "data_size": 63488 00:35:14.549 } 00:35:14.549 ] 00:35:14.549 }' 00:35:14.549 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:14.549 23:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:15.116 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:35:15.116 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:35:15.116 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:35:15.116 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:35:15.116 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:35:15.116 
23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:35:15.116 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:35:15.116 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:35:15.116 23:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.116 23:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:15.116 [2024-12-09 23:16:55.482642] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:15.116 23:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.116 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:15.116 "name": "Existed_Raid", 00:35:15.116 "aliases": [ 00:35:15.116 "3c0de3b1-c036-4b50-bf78-bb86360b7ce0" 00:35:15.116 ], 00:35:15.116 "product_name": "Raid Volume", 00:35:15.116 "block_size": 512, 00:35:15.116 "num_blocks": 253952, 00:35:15.116 "uuid": "3c0de3b1-c036-4b50-bf78-bb86360b7ce0", 00:35:15.116 "assigned_rate_limits": { 00:35:15.116 "rw_ios_per_sec": 0, 00:35:15.116 "rw_mbytes_per_sec": 0, 00:35:15.116 "r_mbytes_per_sec": 0, 00:35:15.116 "w_mbytes_per_sec": 0 00:35:15.116 }, 00:35:15.116 "claimed": false, 00:35:15.116 "zoned": false, 00:35:15.116 "supported_io_types": { 00:35:15.116 "read": true, 00:35:15.116 "write": true, 00:35:15.116 "unmap": true, 00:35:15.116 "flush": true, 00:35:15.116 "reset": true, 00:35:15.116 "nvme_admin": false, 00:35:15.116 "nvme_io": false, 00:35:15.116 "nvme_io_md": false, 00:35:15.116 "write_zeroes": true, 00:35:15.116 "zcopy": false, 00:35:15.116 "get_zone_info": false, 00:35:15.116 "zone_management": false, 00:35:15.116 "zone_append": false, 00:35:15.116 "compare": false, 00:35:15.116 "compare_and_write": false, 00:35:15.116 "abort": 
false, 00:35:15.116 "seek_hole": false, 00:35:15.116 "seek_data": false, 00:35:15.116 "copy": false, 00:35:15.116 "nvme_iov_md": false 00:35:15.116 }, 00:35:15.116 "memory_domains": [ 00:35:15.116 { 00:35:15.116 "dma_device_id": "system", 00:35:15.116 "dma_device_type": 1 00:35:15.116 }, 00:35:15.116 { 00:35:15.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:15.116 "dma_device_type": 2 00:35:15.116 }, 00:35:15.116 { 00:35:15.116 "dma_device_id": "system", 00:35:15.116 "dma_device_type": 1 00:35:15.116 }, 00:35:15.116 { 00:35:15.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:15.116 "dma_device_type": 2 00:35:15.116 }, 00:35:15.116 { 00:35:15.116 "dma_device_id": "system", 00:35:15.116 "dma_device_type": 1 00:35:15.116 }, 00:35:15.116 { 00:35:15.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:15.116 "dma_device_type": 2 00:35:15.116 }, 00:35:15.116 { 00:35:15.116 "dma_device_id": "system", 00:35:15.116 "dma_device_type": 1 00:35:15.116 }, 00:35:15.116 { 00:35:15.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:15.116 "dma_device_type": 2 00:35:15.116 } 00:35:15.116 ], 00:35:15.116 "driver_specific": { 00:35:15.116 "raid": { 00:35:15.116 "uuid": "3c0de3b1-c036-4b50-bf78-bb86360b7ce0", 00:35:15.116 "strip_size_kb": 64, 00:35:15.116 "state": "online", 00:35:15.116 "raid_level": "raid0", 00:35:15.116 "superblock": true, 00:35:15.116 "num_base_bdevs": 4, 00:35:15.116 "num_base_bdevs_discovered": 4, 00:35:15.116 "num_base_bdevs_operational": 4, 00:35:15.116 "base_bdevs_list": [ 00:35:15.116 { 00:35:15.116 "name": "NewBaseBdev", 00:35:15.116 "uuid": "b3ac0e56-da61-4592-818f-9bf50122a21b", 00:35:15.116 "is_configured": true, 00:35:15.116 "data_offset": 2048, 00:35:15.116 "data_size": 63488 00:35:15.116 }, 00:35:15.116 { 00:35:15.116 "name": "BaseBdev2", 00:35:15.116 "uuid": "f07e1bc1-3df7-4754-ac03-62996e92aa13", 00:35:15.116 "is_configured": true, 00:35:15.116 "data_offset": 2048, 00:35:15.116 "data_size": 63488 00:35:15.116 }, 00:35:15.116 { 00:35:15.116 
"name": "BaseBdev3", 00:35:15.116 "uuid": "785073a0-259c-480a-ba4b-78d3b1c5e7f2", 00:35:15.116 "is_configured": true, 00:35:15.116 "data_offset": 2048, 00:35:15.116 "data_size": 63488 00:35:15.116 }, 00:35:15.116 { 00:35:15.116 "name": "BaseBdev4", 00:35:15.116 "uuid": "bc3a0d1c-6983-41a1-89dc-6557dd381a64", 00:35:15.116 "is_configured": true, 00:35:15.116 "data_offset": 2048, 00:35:15.116 "data_size": 63488 00:35:15.116 } 00:35:15.116 ] 00:35:15.116 } 00:35:15.116 } 00:35:15.116 }' 00:35:15.116 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:15.116 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:35:15.116 BaseBdev2 00:35:15.116 BaseBdev3 00:35:15.116 BaseBdev4' 00:35:15.116 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:15.117 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:35:15.117 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:15.117 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:35:15.117 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:15.117 23:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.117 23:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:15.117 23:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.117 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:15.117 23:16:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:15.117 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:15.117 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:35:15.117 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:15.117 23:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.117 23:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:15.117 23:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.117 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:15.117 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:15.117 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:15.117 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:35:15.117 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:15.117 23:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.117 23:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:15.117 23:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.117 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:15.117 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:35:15.117 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:15.117 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:15.117 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:35:15.117 23:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.117 23:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:15.374 23:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.374 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:15.374 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:15.374 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:35:15.374 23:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.374 23:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:15.374 [2024-12-09 23:16:55.757855] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:15.374 [2024-12-09 23:16:55.757890] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:15.374 [2024-12-09 23:16:55.757974] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:15.374 [2024-12-09 23:16:55.758049] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:15.374 [2024-12-09 23:16:55.758061] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:35:15.374 23:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.374 23:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 69917 00:35:15.374 23:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 69917 ']' 00:35:15.374 23:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 69917 00:35:15.374 23:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:35:15.374 23:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:15.374 23:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69917 00:35:15.374 23:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:15.374 23:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:15.374 23:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69917' 00:35:15.374 killing process with pid 69917 00:35:15.374 23:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 69917 00:35:15.374 [2024-12-09 23:16:55.808220] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:15.374 23:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 69917 00:35:15.633 [2024-12-09 23:16:56.221370] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:17.008 23:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:35:17.008 00:35:17.008 real 0m11.308s 00:35:17.008 user 0m17.921s 00:35:17.008 sys 0m2.196s 00:35:17.008 23:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:17.008 
************************************ 00:35:17.008 END TEST raid_state_function_test_sb 00:35:17.008 ************************************ 00:35:17.008 23:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:17.008 23:16:57 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:35:17.008 23:16:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:17.008 23:16:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:17.008 23:16:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:17.008 ************************************ 00:35:17.008 START TEST raid_superblock_test 00:35:17.008 ************************************ 00:35:17.008 23:16:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:35:17.008 23:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:35:17.008 23:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:35:17.008 23:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:35:17.008 23:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:35:17.008 23:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:35:17.008 23:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:35:17.008 23:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:35:17.008 23:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:35:17.008 23:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:35:17.008 23:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:35:17.008 23:16:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:35:17.008 23:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:35:17.008 23:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:35:17.008 23:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:35:17.008 23:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:35:17.008 23:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:35:17.008 23:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70587 00:35:17.008 23:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:35:17.008 23:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70587 00:35:17.008 23:16:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70587 ']' 00:35:17.008 23:16:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:17.008 23:16:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:17.008 23:16:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:17.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:17.008 23:16:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:17.008 23:16:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:17.008 [2024-12-09 23:16:57.604269] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:35:17.008 [2024-12-09 23:16:57.604434] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70587 ] 00:35:17.298 [2024-12-09 23:16:57.788570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:17.298 [2024-12-09 23:16:57.909189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:17.575 [2024-12-09 23:16:58.120363] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:17.575 [2024-12-09 23:16:58.120441] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:17.836 23:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:17.836 23:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:35:17.837 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:35:17.837 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:35:17.837 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:35:17.837 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:35:17.837 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:35:17.837 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:17.837 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:35:17.837 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:17.837 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:35:17.837 
23:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.837 23:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:18.094 malloc1 00:35:18.094 23:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.094 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:18.094 23:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.094 23:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:18.094 [2024-12-09 23:16:58.485621] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:18.094 [2024-12-09 23:16:58.485690] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:18.094 [2024-12-09 23:16:58.485719] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:35:18.094 [2024-12-09 23:16:58.485732] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:18.094 [2024-12-09 23:16:58.488263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:18.094 [2024-12-09 23:16:58.488304] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:18.094 pt1 00:35:18.094 23:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.094 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:18.095 malloc2 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:18.095 [2024-12-09 23:16:58.542718] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:18.095 [2024-12-09 23:16:58.542783] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:18.095 [2024-12-09 23:16:58.542813] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:35:18.095 [2024-12-09 23:16:58.542841] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:18.095 [2024-12-09 23:16:58.545370] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:18.095 [2024-12-09 23:16:58.545427] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:18.095 
pt2 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:18.095 malloc3 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:18.095 [2024-12-09 23:16:58.611410] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:35:18.095 [2024-12-09 23:16:58.611585] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:18.095 [2024-12-09 23:16:58.611717] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:35:18.095 [2024-12-09 23:16:58.611820] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:18.095 [2024-12-09 23:16:58.614372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:18.095 [2024-12-09 23:16:58.614522] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:35:18.095 pt3 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:18.095 malloc4 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:18.095 [2024-12-09 23:16:58.672455] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:35:18.095 [2024-12-09 23:16:58.672524] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:18.095 [2024-12-09 23:16:58.672552] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:35:18.095 [2024-12-09 23:16:58.672564] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:18.095 [2024-12-09 23:16:58.675179] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:18.095 [2024-12-09 23:16:58.675223] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:35:18.095 pt4 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:18.095 [2024-12-09 23:16:58.684478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:18.095 [2024-12-09 
23:16:58.686662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:18.095 [2024-12-09 23:16:58.686922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:35:18.095 [2024-12-09 23:16:58.686984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:35:18.095 [2024-12-09 23:16:58.687200] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:35:18.095 [2024-12-09 23:16:58.687213] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:35:18.095 [2024-12-09 23:16:58.687538] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:35:18.095 [2024-12-09 23:16:58.687714] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:35:18.095 [2024-12-09 23:16:58.687729] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:35:18.095 [2024-12-09 23:16:58.687943] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:18.095 23:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.354 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:18.354 "name": "raid_bdev1", 00:35:18.354 "uuid": "e09c8950-ea49-41f0-873a-3d631addec8a", 00:35:18.354 "strip_size_kb": 64, 00:35:18.354 "state": "online", 00:35:18.354 "raid_level": "raid0", 00:35:18.354 "superblock": true, 00:35:18.354 "num_base_bdevs": 4, 00:35:18.354 "num_base_bdevs_discovered": 4, 00:35:18.354 "num_base_bdevs_operational": 4, 00:35:18.354 "base_bdevs_list": [ 00:35:18.354 { 00:35:18.354 "name": "pt1", 00:35:18.354 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:18.354 "is_configured": true, 00:35:18.354 "data_offset": 2048, 00:35:18.354 "data_size": 63488 00:35:18.354 }, 00:35:18.354 { 00:35:18.354 "name": "pt2", 00:35:18.354 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:18.354 "is_configured": true, 00:35:18.354 "data_offset": 2048, 00:35:18.354 "data_size": 63488 00:35:18.354 }, 00:35:18.354 { 00:35:18.354 "name": "pt3", 00:35:18.354 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:18.354 "is_configured": true, 00:35:18.354 "data_offset": 2048, 00:35:18.354 
"data_size": 63488 00:35:18.354 }, 00:35:18.354 { 00:35:18.354 "name": "pt4", 00:35:18.354 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:18.354 "is_configured": true, 00:35:18.354 "data_offset": 2048, 00:35:18.354 "data_size": 63488 00:35:18.354 } 00:35:18.354 ] 00:35:18.354 }' 00:35:18.354 23:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:18.354 23:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:18.612 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:35:18.612 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:35:18.612 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:35:18.612 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:35:18.612 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:35:18.612 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:35:18.612 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:35:18.612 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:35:18.612 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.612 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:18.612 [2024-12-09 23:16:59.120144] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:18.612 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.612 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:18.612 "name": "raid_bdev1", 00:35:18.612 "aliases": [ 00:35:18.612 "e09c8950-ea49-41f0-873a-3d631addec8a" 
00:35:18.612 ], 00:35:18.612 "product_name": "Raid Volume", 00:35:18.612 "block_size": 512, 00:35:18.612 "num_blocks": 253952, 00:35:18.612 "uuid": "e09c8950-ea49-41f0-873a-3d631addec8a", 00:35:18.612 "assigned_rate_limits": { 00:35:18.612 "rw_ios_per_sec": 0, 00:35:18.612 "rw_mbytes_per_sec": 0, 00:35:18.612 "r_mbytes_per_sec": 0, 00:35:18.612 "w_mbytes_per_sec": 0 00:35:18.612 }, 00:35:18.612 "claimed": false, 00:35:18.612 "zoned": false, 00:35:18.612 "supported_io_types": { 00:35:18.612 "read": true, 00:35:18.612 "write": true, 00:35:18.612 "unmap": true, 00:35:18.612 "flush": true, 00:35:18.612 "reset": true, 00:35:18.612 "nvme_admin": false, 00:35:18.612 "nvme_io": false, 00:35:18.612 "nvme_io_md": false, 00:35:18.612 "write_zeroes": true, 00:35:18.612 "zcopy": false, 00:35:18.612 "get_zone_info": false, 00:35:18.612 "zone_management": false, 00:35:18.612 "zone_append": false, 00:35:18.612 "compare": false, 00:35:18.612 "compare_and_write": false, 00:35:18.612 "abort": false, 00:35:18.612 "seek_hole": false, 00:35:18.612 "seek_data": false, 00:35:18.612 "copy": false, 00:35:18.612 "nvme_iov_md": false 00:35:18.612 }, 00:35:18.612 "memory_domains": [ 00:35:18.612 { 00:35:18.612 "dma_device_id": "system", 00:35:18.612 "dma_device_type": 1 00:35:18.612 }, 00:35:18.612 { 00:35:18.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:18.612 "dma_device_type": 2 00:35:18.612 }, 00:35:18.612 { 00:35:18.612 "dma_device_id": "system", 00:35:18.612 "dma_device_type": 1 00:35:18.612 }, 00:35:18.612 { 00:35:18.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:18.612 "dma_device_type": 2 00:35:18.612 }, 00:35:18.612 { 00:35:18.612 "dma_device_id": "system", 00:35:18.612 "dma_device_type": 1 00:35:18.612 }, 00:35:18.612 { 00:35:18.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:18.612 "dma_device_type": 2 00:35:18.612 }, 00:35:18.612 { 00:35:18.612 "dma_device_id": "system", 00:35:18.612 "dma_device_type": 1 00:35:18.612 }, 00:35:18.612 { 00:35:18.612 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:35:18.612 "dma_device_type": 2 00:35:18.612 } 00:35:18.612 ], 00:35:18.612 "driver_specific": { 00:35:18.612 "raid": { 00:35:18.612 "uuid": "e09c8950-ea49-41f0-873a-3d631addec8a", 00:35:18.612 "strip_size_kb": 64, 00:35:18.612 "state": "online", 00:35:18.612 "raid_level": "raid0", 00:35:18.612 "superblock": true, 00:35:18.612 "num_base_bdevs": 4, 00:35:18.612 "num_base_bdevs_discovered": 4, 00:35:18.612 "num_base_bdevs_operational": 4, 00:35:18.612 "base_bdevs_list": [ 00:35:18.612 { 00:35:18.612 "name": "pt1", 00:35:18.612 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:18.612 "is_configured": true, 00:35:18.612 "data_offset": 2048, 00:35:18.612 "data_size": 63488 00:35:18.612 }, 00:35:18.612 { 00:35:18.612 "name": "pt2", 00:35:18.612 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:18.612 "is_configured": true, 00:35:18.612 "data_offset": 2048, 00:35:18.612 "data_size": 63488 00:35:18.612 }, 00:35:18.612 { 00:35:18.612 "name": "pt3", 00:35:18.612 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:18.612 "is_configured": true, 00:35:18.612 "data_offset": 2048, 00:35:18.612 "data_size": 63488 00:35:18.612 }, 00:35:18.612 { 00:35:18.612 "name": "pt4", 00:35:18.612 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:18.612 "is_configured": true, 00:35:18.612 "data_offset": 2048, 00:35:18.612 "data_size": 63488 00:35:18.612 } 00:35:18.612 ] 00:35:18.612 } 00:35:18.612 } 00:35:18.612 }' 00:35:18.612 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:18.612 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:35:18.612 pt2 00:35:18.612 pt3 00:35:18.612 pt4' 00:35:18.612 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:18.870 23:16:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:18.870 [2024-12-09 23:16:59.423715] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e09c8950-ea49-41f0-873a-3d631addec8a 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e09c8950-ea49-41f0-873a-3d631addec8a ']' 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.870 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:18.870 [2024-12-09 23:16:59.467343] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:18.870 [2024-12-09 23:16:59.467386] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:18.870 [2024-12-09 23:16:59.467498] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:18.870 [2024-12-09 23:16:59.467569] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:18.871 [2024-12-09 23:16:59.467587] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:35:18.871 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.871 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:35:18.871 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:18.871 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:35:18.871 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:18.871 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:19.129 23:16:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:19.129 [2024-12-09 23:16:59.599209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:35:19.129 [2024-12-09 23:16:59.601374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:35:19.129 [2024-12-09 23:16:59.601443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:35:19.129 [2024-12-09 23:16:59.601478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:35:19.129 [2024-12-09 23:16:59.601534] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:35:19.129 [2024-12-09 23:16:59.601593] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:35:19.129 [2024-12-09 23:16:59.601616] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:35:19.129 [2024-12-09 23:16:59.601637] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:35:19.129 [2024-12-09 23:16:59.601654] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:19.129 [2024-12-09 23:16:59.601670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:35:19.129 request: 00:35:19.129 { 00:35:19.129 "name": "raid_bdev1", 00:35:19.129 "raid_level": "raid0", 00:35:19.129 "base_bdevs": [ 00:35:19.129 "malloc1", 00:35:19.129 "malloc2", 00:35:19.129 "malloc3", 00:35:19.129 "malloc4" 00:35:19.129 ], 00:35:19.129 "strip_size_kb": 64, 00:35:19.129 "superblock": false, 00:35:19.129 "method": "bdev_raid_create", 00:35:19.129 "req_id": 1 00:35:19.129 } 00:35:19.129 Got JSON-RPC error response 00:35:19.129 response: 00:35:19.129 { 00:35:19.129 "code": -17, 00:35:19.129 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:35:19.129 } 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:19.129 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:19.130 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:19.130 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:35:19.130 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:19.130 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.130 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:19.130 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.130 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:35:19.130 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:35:19.130 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:35:19.130 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.130 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:19.130 [2024-12-09 23:16:59.643109] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:19.130 [2024-12-09 23:16:59.643338] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:19.130 [2024-12-09 23:16:59.643374] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:35:19.130 [2024-12-09 23:16:59.643404] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:19.130 [2024-12-09 23:16:59.645989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:19.130 [2024-12-09 23:16:59.646037] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:19.130 [2024-12-09 23:16:59.646128] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:35:19.130 [2024-12-09 23:16:59.646195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:19.130 pt1 00:35:19.130 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.130 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:35:19.130 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:19.130 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:19.130 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:19.130 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:19.130 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:35:19.130 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:19.130 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:19.130 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:19.130 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:19.130 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:19.130 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:19.130 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.130 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:19.130 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.130 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:19.130 "name": "raid_bdev1", 00:35:19.130 "uuid": "e09c8950-ea49-41f0-873a-3d631addec8a", 00:35:19.130 "strip_size_kb": 64, 00:35:19.130 "state": "configuring", 00:35:19.130 "raid_level": "raid0", 00:35:19.130 "superblock": true, 00:35:19.130 "num_base_bdevs": 4, 00:35:19.130 "num_base_bdevs_discovered": 1, 00:35:19.130 "num_base_bdevs_operational": 4, 00:35:19.130 "base_bdevs_list": [ 00:35:19.130 { 00:35:19.130 "name": "pt1", 00:35:19.130 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:19.130 "is_configured": true, 00:35:19.130 "data_offset": 2048, 00:35:19.130 "data_size": 63488 00:35:19.130 }, 00:35:19.130 { 00:35:19.130 "name": null, 00:35:19.130 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:19.130 "is_configured": false, 00:35:19.130 "data_offset": 2048, 00:35:19.130 "data_size": 63488 00:35:19.130 }, 00:35:19.130 { 00:35:19.130 "name": null, 00:35:19.130 
"uuid": "00000000-0000-0000-0000-000000000003", 00:35:19.130 "is_configured": false, 00:35:19.130 "data_offset": 2048, 00:35:19.130 "data_size": 63488 00:35:19.130 }, 00:35:19.130 { 00:35:19.130 "name": null, 00:35:19.130 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:19.130 "is_configured": false, 00:35:19.130 "data_offset": 2048, 00:35:19.130 "data_size": 63488 00:35:19.130 } 00:35:19.130 ] 00:35:19.130 }' 00:35:19.130 23:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:19.130 23:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:19.695 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:35:19.695 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:19.695 23:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.695 23:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:19.695 [2024-12-09 23:17:00.034555] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:19.695 [2024-12-09 23:17:00.034761] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:19.695 [2024-12-09 23:17:00.034795] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:35:19.695 [2024-12-09 23:17:00.034811] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:19.695 [2024-12-09 23:17:00.035274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:19.695 [2024-12-09 23:17:00.035305] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:19.695 [2024-12-09 23:17:00.035404] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:35:19.695 [2024-12-09 23:17:00.035433] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:19.695 pt2 00:35:19.695 23:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.695 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:35:19.695 23:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.695 23:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:19.695 [2024-12-09 23:17:00.042585] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:35:19.695 23:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.695 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:35:19.695 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:19.695 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:19.695 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:19.695 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:19.695 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:19.695 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:19.695 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:19.695 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:19.695 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:19.695 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:19.695 23:17:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.695 23:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:19.695 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:19.695 23:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.695 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:19.695 "name": "raid_bdev1", 00:35:19.695 "uuid": "e09c8950-ea49-41f0-873a-3d631addec8a", 00:35:19.695 "strip_size_kb": 64, 00:35:19.695 "state": "configuring", 00:35:19.695 "raid_level": "raid0", 00:35:19.695 "superblock": true, 00:35:19.695 "num_base_bdevs": 4, 00:35:19.695 "num_base_bdevs_discovered": 1, 00:35:19.695 "num_base_bdevs_operational": 4, 00:35:19.695 "base_bdevs_list": [ 00:35:19.695 { 00:35:19.695 "name": "pt1", 00:35:19.695 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:19.695 "is_configured": true, 00:35:19.695 "data_offset": 2048, 00:35:19.695 "data_size": 63488 00:35:19.695 }, 00:35:19.695 { 00:35:19.695 "name": null, 00:35:19.695 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:19.695 "is_configured": false, 00:35:19.695 "data_offset": 0, 00:35:19.695 "data_size": 63488 00:35:19.695 }, 00:35:19.695 { 00:35:19.695 "name": null, 00:35:19.695 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:19.695 "is_configured": false, 00:35:19.695 "data_offset": 2048, 00:35:19.695 "data_size": 63488 00:35:19.695 }, 00:35:19.695 { 00:35:19.695 "name": null, 00:35:19.695 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:19.695 "is_configured": false, 00:35:19.695 "data_offset": 2048, 00:35:19.695 "data_size": 63488 00:35:19.695 } 00:35:19.695 ] 00:35:19.695 }' 00:35:19.695 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:19.695 23:17:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:35:19.953 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:35:19.953 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:35:19.953 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:19.953 23:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.953 23:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:19.953 [2024-12-09 23:17:00.474351] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:19.953 [2024-12-09 23:17:00.474439] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:19.953 [2024-12-09 23:17:00.474469] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:35:19.953 [2024-12-09 23:17:00.474482] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:19.953 [2024-12-09 23:17:00.474982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:19.953 [2024-12-09 23:17:00.475018] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:19.953 [2024-12-09 23:17:00.475112] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:35:19.953 [2024-12-09 23:17:00.475137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:19.953 pt2 00:35:19.953 23:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.953 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:35:19.953 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:35:19.953 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:35:19.953 23:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.953 23:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:19.953 [2024-12-09 23:17:00.482299] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:35:19.953 [2024-12-09 23:17:00.482360] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:19.953 [2024-12-09 23:17:00.482385] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:35:19.953 [2024-12-09 23:17:00.482410] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:19.953 [2024-12-09 23:17:00.482845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:19.953 [2024-12-09 23:17:00.482878] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:35:19.953 [2024-12-09 23:17:00.482962] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:35:19.953 [2024-12-09 23:17:00.482992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:35:19.953 pt3 00:35:19.953 23:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.953 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:35:19.953 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:35:19.953 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:35:19.953 23:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.953 23:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:19.953 [2024-12-09 23:17:00.490265] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:35:19.953 [2024-12-09 23:17:00.490320] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:19.953 [2024-12-09 23:17:00.490342] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:35:19.953 [2024-12-09 23:17:00.490355] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:19.953 [2024-12-09 23:17:00.490814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:19.953 [2024-12-09 23:17:00.490864] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:35:19.953 [2024-12-09 23:17:00.490942] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:35:19.953 [2024-12-09 23:17:00.490970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:35:19.953 [2024-12-09 23:17:00.491118] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:35:19.953 [2024-12-09 23:17:00.491132] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:35:19.953 [2024-12-09 23:17:00.491446] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:35:19.953 [2024-12-09 23:17:00.491598] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:35:19.953 [2024-12-09 23:17:00.491614] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:35:19.953 [2024-12-09 23:17:00.491752] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:19.953 pt4 00:35:19.953 23:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.953 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:35:19.953 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:35:19.953 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:35:19.953 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:19.953 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:19.953 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:19.953 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:19.953 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:19.953 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:19.953 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:19.953 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:19.953 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:19.953 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:19.953 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:19.954 23:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.954 23:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:19.954 23:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.954 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:19.954 "name": "raid_bdev1", 00:35:19.954 "uuid": "e09c8950-ea49-41f0-873a-3d631addec8a", 00:35:19.954 "strip_size_kb": 64, 00:35:19.954 "state": "online", 00:35:19.954 "raid_level": "raid0", 00:35:19.954 
"superblock": true, 00:35:19.954 "num_base_bdevs": 4, 00:35:19.954 "num_base_bdevs_discovered": 4, 00:35:19.954 "num_base_bdevs_operational": 4, 00:35:19.954 "base_bdevs_list": [ 00:35:19.954 { 00:35:19.954 "name": "pt1", 00:35:19.954 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:19.954 "is_configured": true, 00:35:19.954 "data_offset": 2048, 00:35:19.954 "data_size": 63488 00:35:19.954 }, 00:35:19.954 { 00:35:19.954 "name": "pt2", 00:35:19.954 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:19.954 "is_configured": true, 00:35:19.954 "data_offset": 2048, 00:35:19.954 "data_size": 63488 00:35:19.954 }, 00:35:19.954 { 00:35:19.954 "name": "pt3", 00:35:19.954 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:19.954 "is_configured": true, 00:35:19.954 "data_offset": 2048, 00:35:19.954 "data_size": 63488 00:35:19.954 }, 00:35:19.954 { 00:35:19.954 "name": "pt4", 00:35:19.954 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:19.954 "is_configured": true, 00:35:19.954 "data_offset": 2048, 00:35:19.954 "data_size": 63488 00:35:19.954 } 00:35:19.954 ] 00:35:19.954 }' 00:35:19.954 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:19.954 23:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:20.519 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:35:20.519 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:35:20.519 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:35:20.519 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:35:20.519 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:35:20.519 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:35:20.519 23:17:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:35:20.519 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:35:20.519 23:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.519 23:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:20.519 [2024-12-09 23:17:00.894080] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:20.519 23:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.519 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:20.519 "name": "raid_bdev1", 00:35:20.519 "aliases": [ 00:35:20.519 "e09c8950-ea49-41f0-873a-3d631addec8a" 00:35:20.519 ], 00:35:20.519 "product_name": "Raid Volume", 00:35:20.519 "block_size": 512, 00:35:20.519 "num_blocks": 253952, 00:35:20.519 "uuid": "e09c8950-ea49-41f0-873a-3d631addec8a", 00:35:20.519 "assigned_rate_limits": { 00:35:20.519 "rw_ios_per_sec": 0, 00:35:20.519 "rw_mbytes_per_sec": 0, 00:35:20.519 "r_mbytes_per_sec": 0, 00:35:20.519 "w_mbytes_per_sec": 0 00:35:20.519 }, 00:35:20.519 "claimed": false, 00:35:20.519 "zoned": false, 00:35:20.519 "supported_io_types": { 00:35:20.519 "read": true, 00:35:20.519 "write": true, 00:35:20.519 "unmap": true, 00:35:20.519 "flush": true, 00:35:20.519 "reset": true, 00:35:20.519 "nvme_admin": false, 00:35:20.519 "nvme_io": false, 00:35:20.519 "nvme_io_md": false, 00:35:20.519 "write_zeroes": true, 00:35:20.519 "zcopy": false, 00:35:20.519 "get_zone_info": false, 00:35:20.519 "zone_management": false, 00:35:20.519 "zone_append": false, 00:35:20.519 "compare": false, 00:35:20.519 "compare_and_write": false, 00:35:20.519 "abort": false, 00:35:20.519 "seek_hole": false, 00:35:20.519 "seek_data": false, 00:35:20.519 "copy": false, 00:35:20.519 "nvme_iov_md": false 00:35:20.519 }, 00:35:20.519 
"memory_domains": [ 00:35:20.519 { 00:35:20.519 "dma_device_id": "system", 00:35:20.519 "dma_device_type": 1 00:35:20.519 }, 00:35:20.519 { 00:35:20.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:20.519 "dma_device_type": 2 00:35:20.519 }, 00:35:20.519 { 00:35:20.519 "dma_device_id": "system", 00:35:20.519 "dma_device_type": 1 00:35:20.519 }, 00:35:20.519 { 00:35:20.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:20.519 "dma_device_type": 2 00:35:20.519 }, 00:35:20.519 { 00:35:20.519 "dma_device_id": "system", 00:35:20.519 "dma_device_type": 1 00:35:20.519 }, 00:35:20.519 { 00:35:20.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:20.519 "dma_device_type": 2 00:35:20.519 }, 00:35:20.519 { 00:35:20.519 "dma_device_id": "system", 00:35:20.519 "dma_device_type": 1 00:35:20.519 }, 00:35:20.519 { 00:35:20.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:20.519 "dma_device_type": 2 00:35:20.519 } 00:35:20.519 ], 00:35:20.519 "driver_specific": { 00:35:20.519 "raid": { 00:35:20.519 "uuid": "e09c8950-ea49-41f0-873a-3d631addec8a", 00:35:20.519 "strip_size_kb": 64, 00:35:20.519 "state": "online", 00:35:20.519 "raid_level": "raid0", 00:35:20.519 "superblock": true, 00:35:20.519 "num_base_bdevs": 4, 00:35:20.519 "num_base_bdevs_discovered": 4, 00:35:20.519 "num_base_bdevs_operational": 4, 00:35:20.519 "base_bdevs_list": [ 00:35:20.519 { 00:35:20.519 "name": "pt1", 00:35:20.519 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:20.519 "is_configured": true, 00:35:20.519 "data_offset": 2048, 00:35:20.519 "data_size": 63488 00:35:20.519 }, 00:35:20.519 { 00:35:20.519 "name": "pt2", 00:35:20.519 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:20.519 "is_configured": true, 00:35:20.519 "data_offset": 2048, 00:35:20.519 "data_size": 63488 00:35:20.519 }, 00:35:20.519 { 00:35:20.519 "name": "pt3", 00:35:20.519 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:20.519 "is_configured": true, 00:35:20.519 "data_offset": 2048, 00:35:20.519 "data_size": 63488 
00:35:20.519 }, 00:35:20.519 { 00:35:20.519 "name": "pt4", 00:35:20.519 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:20.519 "is_configured": true, 00:35:20.519 "data_offset": 2048, 00:35:20.519 "data_size": 63488 00:35:20.519 } 00:35:20.519 ] 00:35:20.519 } 00:35:20.519 } 00:35:20.519 }' 00:35:20.519 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:20.519 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:35:20.519 pt2 00:35:20.519 pt3 00:35:20.519 pt4' 00:35:20.519 23:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:20.519 23:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:35:20.519 23:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:20.519 23:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:35:20.519 23:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.519 23:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:20.519 23:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:20.519 23:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.519 23:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:20.519 23:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:20.519 23:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:20.519 23:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:20.519 23:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:35:20.519 23:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.519 23:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:20.519 23:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.519 23:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:20.519 23:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:20.519 23:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:20.519 23:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:35:20.519 23:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:20.520 23:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.520 23:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:20.520 23:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.520 23:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:20.520 23:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:20.520 23:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:20.777 23:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:35:20.777 23:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.777 23:17:01 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:35:20.777 23:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:20.777 23:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.777 23:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:20.777 23:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:20.777 23:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:35:20.777 23:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.777 23:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:20.777 23:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:35:20.777 [2024-12-09 23:17:01.209650] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:20.777 23:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.777 23:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e09c8950-ea49-41f0-873a-3d631addec8a '!=' e09c8950-ea49-41f0-873a-3d631addec8a ']' 00:35:20.777 23:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:35:20.777 23:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:35:20.777 23:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:35:20.777 23:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70587 00:35:20.777 23:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70587 ']' 00:35:20.777 23:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70587 00:35:20.777 23:17:01 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:35:20.777 23:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:20.777 23:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70587 00:35:20.777 23:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:20.777 23:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:20.777 23:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70587' 00:35:20.777 killing process with pid 70587 00:35:20.777 23:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70587 00:35:20.777 23:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70587 00:35:20.777 [2024-12-09 23:17:01.279084] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:20.777 [2024-12-09 23:17:01.279193] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:20.777 [2024-12-09 23:17:01.279288] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:20.777 [2024-12-09 23:17:01.279320] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:35:21.352 [2024-12-09 23:17:01.709693] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:22.293 23:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:35:22.293 00:35:22.293 real 0m5.438s 00:35:22.293 user 0m7.666s 00:35:22.293 sys 0m1.010s 00:35:22.293 23:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:22.293 23:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:22.293 ************************************ 00:35:22.293 END TEST raid_superblock_test 
00:35:22.293 ************************************ 00:35:22.551 23:17:02 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:35:22.551 23:17:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:35:22.551 23:17:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:22.551 23:17:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:22.551 ************************************ 00:35:22.551 START TEST raid_read_error_test 00:35:22.551 ************************************ 00:35:22.551 23:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:35:22.551 23:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:35:22.551 23:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:35:22.551 23:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:35:22.551 23:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:35:22.551 23:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:35:22.551 23:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:35:22.551 23:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:35:22.551 23:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:35:22.551 23:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:35:22.551 23:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:35:22.551 23:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:35:22.551 23:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:35:22.551 23:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:35:22.551 23:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:35:22.551 23:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:35:22.551 23:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:35:22.551 23:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:35:22.551 23:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:35:22.551 23:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:35:22.551 23:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:35:22.551 23:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:35:22.551 23:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:35:22.551 23:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:35:22.551 23:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:35:22.551 23:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:35:22.551 23:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:35:22.551 23:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:35:22.551 23:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:35:22.551 23:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.4xOuTBsTZG 00:35:22.551 23:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70855 00:35:22.551 23:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70855 00:35:22.551 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:35:22.551 23:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 70855 ']' 00:35:22.551 23:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:22.551 23:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:35:22.551 23:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:22.551 23:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:22.551 23:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:22.551 23:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:22.551 [2024-12-09 23:17:03.081218] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:35:22.551 [2024-12-09 23:17:03.081410] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70855 ] 00:35:22.810 [2024-12-09 23:17:03.306851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:22.810 [2024-12-09 23:17:03.439977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:23.069 [2024-12-09 23:17:03.660420] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:23.069 [2024-12-09 23:17:03.660479] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:23.635 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:23.635 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:35:23.635 23:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:35:23.635 23:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:35:23.635 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.635 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:23.635 BaseBdev1_malloc 00:35:23.635 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.635 23:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:35:23.635 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.635 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:23.635 true 00:35:23.635 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:35:23.635 23:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:35:23.635 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.635 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:23.635 [2024-12-09 23:17:04.076344] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:35:23.635 [2024-12-09 23:17:04.076557] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:23.635 [2024-12-09 23:17:04.076676] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:35:23.635 [2024-12-09 23:17:04.076765] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:23.635 [2024-12-09 23:17:04.079391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:23.635 [2024-12-09 23:17:04.079589] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:35:23.635 BaseBdev1 00:35:23.635 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.635 23:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:35:23.635 23:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:35:23.635 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.635 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:23.635 BaseBdev2_malloc 00:35:23.635 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.635 23:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:35:23.635 23:17:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.635 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:23.635 true 00:35:23.635 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.635 23:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:35:23.635 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.636 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:23.636 [2024-12-09 23:17:04.139097] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:35:23.636 [2024-12-09 23:17:04.139178] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:23.636 [2024-12-09 23:17:04.139199] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:35:23.636 [2024-12-09 23:17:04.139213] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:23.636 [2024-12-09 23:17:04.141828] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:23.636 [2024-12-09 23:17:04.141891] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:35:23.636 BaseBdev2 00:35:23.636 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.636 23:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:35:23.636 23:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:35:23.636 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.636 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:23.636 BaseBdev3_malloc 00:35:23.636 23:17:04 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.636 23:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:35:23.636 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.636 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:23.636 true 00:35:23.636 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.636 23:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:35:23.636 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.636 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:23.636 [2024-12-09 23:17:04.212192] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:35:23.636 [2024-12-09 23:17:04.212412] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:23.636 [2024-12-09 23:17:04.212446] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:35:23.636 [2024-12-09 23:17:04.212463] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:23.636 [2024-12-09 23:17:04.215354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:23.636 [2024-12-09 23:17:04.215529] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:35:23.636 BaseBdev3 00:35:23.636 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.636 23:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:35:23.636 23:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:35:23.636 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.636 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:23.636 BaseBdev4_malloc 00:35:23.636 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.636 23:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:35:23.636 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.636 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:23.894 true 00:35:23.894 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.894 23:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:35:23.894 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.894 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:23.894 [2024-12-09 23:17:04.279448] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:35:23.894 [2024-12-09 23:17:04.279685] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:23.894 [2024-12-09 23:17:04.279792] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:35:23.894 [2024-12-09 23:17:04.279875] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:23.894 [2024-12-09 23:17:04.282700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:23.894 [2024-12-09 23:17:04.282869] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:35:23.894 BaseBdev4 00:35:23.894 23:17:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.894 23:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:35:23.894 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.894 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:23.894 [2024-12-09 23:17:04.291655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:23.894 [2024-12-09 23:17:04.294015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:23.894 [2024-12-09 23:17:04.294238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:23.894 [2024-12-09 23:17:04.294484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:35:23.894 [2024-12-09 23:17:04.294830] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:35:23.894 [2024-12-09 23:17:04.294860] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:35:23.894 [2024-12-09 23:17:04.295160] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:35:23.894 [2024-12-09 23:17:04.295336] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:35:23.894 [2024-12-09 23:17:04.295351] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:35:23.894 [2024-12-09 23:17:04.295608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:23.894 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.894 23:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:35:23.894 23:17:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:23.894 23:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:23.894 23:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:23.894 23:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:23.894 23:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:23.894 23:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:23.894 23:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:23.894 23:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:23.894 23:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:23.894 23:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:23.894 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.894 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:23.894 23:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:23.894 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.894 23:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:23.894 "name": "raid_bdev1", 00:35:23.894 "uuid": "50a13f2d-0bea-4e51-94a0-78326398cb6f", 00:35:23.894 "strip_size_kb": 64, 00:35:23.894 "state": "online", 00:35:23.894 "raid_level": "raid0", 00:35:23.894 "superblock": true, 00:35:23.894 "num_base_bdevs": 4, 00:35:23.894 "num_base_bdevs_discovered": 4, 00:35:23.894 "num_base_bdevs_operational": 4, 00:35:23.894 "base_bdevs_list": [ 00:35:23.894 
{ 00:35:23.894 "name": "BaseBdev1", 00:35:23.894 "uuid": "cad9f6f5-0c78-55ad-b374-2a5d32e19298", 00:35:23.894 "is_configured": true, 00:35:23.894 "data_offset": 2048, 00:35:23.894 "data_size": 63488 00:35:23.894 }, 00:35:23.894 { 00:35:23.894 "name": "BaseBdev2", 00:35:23.894 "uuid": "d9c41534-61ea-5d20-a420-c6ffe5d93347", 00:35:23.894 "is_configured": true, 00:35:23.894 "data_offset": 2048, 00:35:23.894 "data_size": 63488 00:35:23.894 }, 00:35:23.894 { 00:35:23.894 "name": "BaseBdev3", 00:35:23.894 "uuid": "71dd9dd7-cffd-541d-9e08-0042db26bd82", 00:35:23.894 "is_configured": true, 00:35:23.894 "data_offset": 2048, 00:35:23.894 "data_size": 63488 00:35:23.894 }, 00:35:23.894 { 00:35:23.894 "name": "BaseBdev4", 00:35:23.894 "uuid": "7fec900f-8839-5c75-b3f1-d8ce6677d5eb", 00:35:23.894 "is_configured": true, 00:35:23.894 "data_offset": 2048, 00:35:23.894 "data_size": 63488 00:35:23.894 } 00:35:23.894 ] 00:35:23.894 }' 00:35:23.894 23:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:23.894 23:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:24.153 23:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:35:24.153 23:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:35:24.412 [2024-12-09 23:17:04.848476] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:35:25.345 23:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:35:25.345 23:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.345 23:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:25.345 23:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.345 23:17:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:35:25.345 23:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:35:25.345 23:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:35:25.345 23:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:35:25.345 23:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:25.345 23:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:25.345 23:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:25.345 23:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:25.345 23:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:25.345 23:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:25.345 23:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:25.345 23:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:25.345 23:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:25.345 23:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:25.345 23:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.345 23:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:25.345 23:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:25.345 23:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.345 23:17:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:25.345 "name": "raid_bdev1", 00:35:25.345 "uuid": "50a13f2d-0bea-4e51-94a0-78326398cb6f", 00:35:25.345 "strip_size_kb": 64, 00:35:25.345 "state": "online", 00:35:25.345 "raid_level": "raid0", 00:35:25.345 "superblock": true, 00:35:25.345 "num_base_bdevs": 4, 00:35:25.345 "num_base_bdevs_discovered": 4, 00:35:25.345 "num_base_bdevs_operational": 4, 00:35:25.345 "base_bdevs_list": [ 00:35:25.345 { 00:35:25.345 "name": "BaseBdev1", 00:35:25.345 "uuid": "cad9f6f5-0c78-55ad-b374-2a5d32e19298", 00:35:25.345 "is_configured": true, 00:35:25.345 "data_offset": 2048, 00:35:25.345 "data_size": 63488 00:35:25.345 }, 00:35:25.345 { 00:35:25.345 "name": "BaseBdev2", 00:35:25.345 "uuid": "d9c41534-61ea-5d20-a420-c6ffe5d93347", 00:35:25.345 "is_configured": true, 00:35:25.345 "data_offset": 2048, 00:35:25.345 "data_size": 63488 00:35:25.345 }, 00:35:25.345 { 00:35:25.345 "name": "BaseBdev3", 00:35:25.345 "uuid": "71dd9dd7-cffd-541d-9e08-0042db26bd82", 00:35:25.345 "is_configured": true, 00:35:25.345 "data_offset": 2048, 00:35:25.345 "data_size": 63488 00:35:25.345 }, 00:35:25.345 { 00:35:25.345 "name": "BaseBdev4", 00:35:25.345 "uuid": "7fec900f-8839-5c75-b3f1-d8ce6677d5eb", 00:35:25.345 "is_configured": true, 00:35:25.345 "data_offset": 2048, 00:35:25.345 "data_size": 63488 00:35:25.345 } 00:35:25.345 ] 00:35:25.345 }' 00:35:25.345 23:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:25.345 23:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:25.603 23:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:35:25.603 23:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.603 23:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:25.603 [2024-12-09 23:17:06.201353] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:25.603 [2024-12-09 23:17:06.201392] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:25.603 [2024-12-09 23:17:06.204324] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:25.603 [2024-12-09 23:17:06.204390] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:25.603 [2024-12-09 23:17:06.204447] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:25.603 [2024-12-09 23:17:06.204463] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:35:25.603 { 00:35:25.603 "results": [ 00:35:25.603 { 00:35:25.603 "job": "raid_bdev1", 00:35:25.603 "core_mask": "0x1", 00:35:25.603 "workload": "randrw", 00:35:25.603 "percentage": 50, 00:35:25.603 "status": "finished", 00:35:25.603 "queue_depth": 1, 00:35:25.603 "io_size": 131072, 00:35:25.603 "runtime": 1.352874, 00:35:25.603 "iops": 14373.104960254983, 00:35:25.603 "mibps": 1796.6381200318729, 00:35:25.603 "io_failed": 1, 00:35:25.603 "io_timeout": 0, 00:35:25.603 "avg_latency_us": 96.22396049279914, 00:35:25.603 "min_latency_us": 27.759036144578314, 00:35:25.603 "max_latency_us": 1592.340562248996 00:35:25.603 } 00:35:25.603 ], 00:35:25.603 "core_count": 1 00:35:25.603 } 00:35:25.603 23:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.603 23:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70855 00:35:25.603 23:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 70855 ']' 00:35:25.603 23:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 70855 00:35:25.603 23:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:35:25.603 23:17:06 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:25.603 23:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70855 00:35:25.862 killing process with pid 70855 00:35:25.862 23:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:25.862 23:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:25.862 23:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70855' 00:35:25.862 23:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 70855 00:35:25.862 [2024-12-09 23:17:06.239416] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:25.862 23:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 70855 00:35:26.120 [2024-12-09 23:17:06.587654] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:27.497 23:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.4xOuTBsTZG 00:35:27.497 23:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:35:27.497 23:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:35:27.497 23:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:35:27.497 23:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:35:27.497 23:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:35:27.497 23:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:35:27.497 ************************************ 00:35:27.497 END TEST raid_read_error_test 00:35:27.497 ************************************ 00:35:27.497 23:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:35:27.497 00:35:27.497 real 0m4.885s 
00:35:27.497 user 0m5.755s 00:35:27.497 sys 0m0.614s 00:35:27.497 23:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:27.497 23:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:27.497 23:17:07 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:35:27.497 23:17:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:35:27.497 23:17:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:27.497 23:17:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:27.497 ************************************ 00:35:27.497 START TEST raid_write_error_test 00:35:27.497 ************************************ 00:35:27.497 23:17:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:35:27.497 23:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:35:27.497 23:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:35:27.497 23:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:35:27.497 23:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:35:27.497 23:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:35:27.497 23:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:35:27.497 23:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:35:27.497 23:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:35:27.497 23:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:35:27.497 23:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:35:27.497 23:17:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:35:27.497 23:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:35:27.497 23:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:35:27.497 23:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:35:27.497 23:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:35:27.497 23:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:35:27.497 23:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:35:27.497 23:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:35:27.497 23:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:35:27.497 23:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:35:27.497 23:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:35:27.497 23:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:35:27.497 23:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:35:27.497 23:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:35:27.497 23:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:35:27.497 23:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:35:27.497 23:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:35:27.497 23:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:35:27.497 23:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Y88lyVlGhL 00:35:27.497 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:27.497 23:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70995 00:35:27.497 23:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:35:27.497 23:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70995 00:35:27.497 23:17:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 70995 ']' 00:35:27.497 23:17:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:27.498 23:17:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:27.498 23:17:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:27.498 23:17:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:27.498 23:17:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:27.498 [2024-12-09 23:17:08.039765] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:35:27.498 [2024-12-09 23:17:08.040044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70995 ] 00:35:27.756 [2024-12-09 23:17:08.224313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:27.756 [2024-12-09 23:17:08.347052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:28.015 [2024-12-09 23:17:08.562220] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:28.015 [2024-12-09 23:17:08.562480] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:28.330 23:17:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:28.330 23:17:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:35:28.330 23:17:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:35:28.330 23:17:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:35:28.330 23:17:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.330 23:17:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:28.590 BaseBdev1_malloc 00:35:28.590 23:17:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.590 23:17:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:35:28.590 23:17:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.590 23:17:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:28.590 true 00:35:28.590 23:17:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:35:28.590 23:17:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:35:28.590 23:17:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.590 23:17:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:28.590 [2024-12-09 23:17:08.977411] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:35:28.590 [2024-12-09 23:17:08.977483] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:28.590 [2024-12-09 23:17:08.977507] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:35:28.590 [2024-12-09 23:17:08.977539] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:28.590 [2024-12-09 23:17:08.980132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:28.590 [2024-12-09 23:17:08.980181] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:35:28.590 BaseBdev1 00:35:28.590 23:17:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.590 23:17:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:35:28.590 23:17:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:35:28.590 23:17:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.590 23:17:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:28.590 BaseBdev2_malloc 00:35:28.590 23:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.590 23:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:35:28.590 23:17:09 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.590 23:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:28.590 true 00:35:28.590 23:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.590 23:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:35:28.590 23:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.590 23:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:28.590 [2024-12-09 23:17:09.046515] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:35:28.590 [2024-12-09 23:17:09.046748] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:28.590 [2024-12-09 23:17:09.046783] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:35:28.590 [2024-12-09 23:17:09.046799] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:28.590 [2024-12-09 23:17:09.049477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:28.590 [2024-12-09 23:17:09.049523] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:35:28.590 BaseBdev2 00:35:28.590 23:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.590 23:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:35:28.590 23:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:35:28.590 23:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.590 23:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:35:28.590 BaseBdev3_malloc 00:35:28.590 23:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.590 23:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:35:28.590 23:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.590 23:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:28.590 true 00:35:28.590 23:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.590 23:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:35:28.590 23:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.590 23:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:28.590 [2024-12-09 23:17:09.128769] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:35:28.590 [2024-12-09 23:17:09.128967] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:28.590 [2024-12-09 23:17:09.129034] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:35:28.591 [2024-12-09 23:17:09.129123] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:28.591 [2024-12-09 23:17:09.132039] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:28.591 [2024-12-09 23:17:09.132226] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:35:28.591 BaseBdev3 00:35:28.591 23:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.591 23:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:35:28.591 23:17:09 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:35:28.591 23:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.591 23:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:28.591 BaseBdev4_malloc 00:35:28.591 23:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.591 23:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:35:28.591 23:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.591 23:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:28.591 true 00:35:28.591 23:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.591 23:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:35:28.591 23:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.591 23:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:28.591 [2024-12-09 23:17:09.194269] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:35:28.591 [2024-12-09 23:17:09.194476] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:28.591 [2024-12-09 23:17:09.194536] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:35:28.591 [2024-12-09 23:17:09.194623] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:28.591 [2024-12-09 23:17:09.197164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:28.591 [2024-12-09 23:17:09.197357] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:35:28.591 BaseBdev4 
00:35:28.591 23:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.591 23:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:35:28.591 23:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.591 23:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:28.591 [2024-12-09 23:17:09.206421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:28.591 [2024-12-09 23:17:09.208729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:28.591 [2024-12-09 23:17:09.208927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:28.591 [2024-12-09 23:17:09.209035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:35:28.591 [2024-12-09 23:17:09.209354] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:35:28.591 [2024-12-09 23:17:09.209434] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:35:28.591 [2024-12-09 23:17:09.209825] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:35:28.591 [2024-12-09 23:17:09.210108] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:35:28.591 [2024-12-09 23:17:09.210209] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:35:28.591 [2024-12-09 23:17:09.210567] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:28.591 23:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.591 23:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:35:28.591 23:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:28.591 23:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:28.591 23:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:28.591 23:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:28.591 23:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:28.591 23:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:28.591 23:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:28.591 23:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:28.591 23:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:28.591 23:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:28.591 23:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:28.591 23:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.591 23:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:28.849 23:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.849 23:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:28.849 "name": "raid_bdev1", 00:35:28.849 "uuid": "b455f874-2dea-4bd7-aaa3-d92918aec80b", 00:35:28.849 "strip_size_kb": 64, 00:35:28.849 "state": "online", 00:35:28.849 "raid_level": "raid0", 00:35:28.849 "superblock": true, 00:35:28.849 "num_base_bdevs": 4, 00:35:28.849 "num_base_bdevs_discovered": 4, 00:35:28.849 
"num_base_bdevs_operational": 4, 00:35:28.849 "base_bdevs_list": [ 00:35:28.849 { 00:35:28.849 "name": "BaseBdev1", 00:35:28.849 "uuid": "be05b736-47d4-5f4f-8ddd-be18637282ed", 00:35:28.849 "is_configured": true, 00:35:28.849 "data_offset": 2048, 00:35:28.849 "data_size": 63488 00:35:28.849 }, 00:35:28.849 { 00:35:28.849 "name": "BaseBdev2", 00:35:28.849 "uuid": "80391060-d5ad-5817-bd01-e8badb8f09af", 00:35:28.849 "is_configured": true, 00:35:28.849 "data_offset": 2048, 00:35:28.849 "data_size": 63488 00:35:28.849 }, 00:35:28.849 { 00:35:28.849 "name": "BaseBdev3", 00:35:28.849 "uuid": "b89da8a7-b75c-51cc-83ee-de34ad81ab1d", 00:35:28.849 "is_configured": true, 00:35:28.849 "data_offset": 2048, 00:35:28.849 "data_size": 63488 00:35:28.849 }, 00:35:28.849 { 00:35:28.849 "name": "BaseBdev4", 00:35:28.849 "uuid": "1360c756-f30f-51f5-a0be-e2b39b92002f", 00:35:28.849 "is_configured": true, 00:35:28.849 "data_offset": 2048, 00:35:28.849 "data_size": 63488 00:35:28.849 } 00:35:28.849 ] 00:35:28.849 }' 00:35:28.849 23:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:28.849 23:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:29.106 23:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:35:29.106 23:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:35:29.106 [2024-12-09 23:17:09.715818] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:35:30.039 23:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:35:30.040 23:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.040 23:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:30.040 23:17:10 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.040 23:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:35:30.040 23:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:35:30.040 23:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:35:30.040 23:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:35:30.040 23:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:30.040 23:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:30.040 23:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:30.040 23:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:30.040 23:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:30.040 23:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:30.040 23:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:30.040 23:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:30.040 23:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:30.040 23:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:30.040 23:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:30.040 23:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.040 23:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:30.040 23:17:10 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.297 23:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:30.297 "name": "raid_bdev1", 00:35:30.297 "uuid": "b455f874-2dea-4bd7-aaa3-d92918aec80b", 00:35:30.297 "strip_size_kb": 64, 00:35:30.297 "state": "online", 00:35:30.297 "raid_level": "raid0", 00:35:30.297 "superblock": true, 00:35:30.297 "num_base_bdevs": 4, 00:35:30.297 "num_base_bdevs_discovered": 4, 00:35:30.297 "num_base_bdevs_operational": 4, 00:35:30.297 "base_bdevs_list": [ 00:35:30.297 { 00:35:30.297 "name": "BaseBdev1", 00:35:30.297 "uuid": "be05b736-47d4-5f4f-8ddd-be18637282ed", 00:35:30.297 "is_configured": true, 00:35:30.297 "data_offset": 2048, 00:35:30.297 "data_size": 63488 00:35:30.297 }, 00:35:30.297 { 00:35:30.297 "name": "BaseBdev2", 00:35:30.298 "uuid": "80391060-d5ad-5817-bd01-e8badb8f09af", 00:35:30.298 "is_configured": true, 00:35:30.298 "data_offset": 2048, 00:35:30.298 "data_size": 63488 00:35:30.298 }, 00:35:30.298 { 00:35:30.298 "name": "BaseBdev3", 00:35:30.298 "uuid": "b89da8a7-b75c-51cc-83ee-de34ad81ab1d", 00:35:30.298 "is_configured": true, 00:35:30.298 "data_offset": 2048, 00:35:30.298 "data_size": 63488 00:35:30.298 }, 00:35:30.298 { 00:35:30.298 "name": "BaseBdev4", 00:35:30.298 "uuid": "1360c756-f30f-51f5-a0be-e2b39b92002f", 00:35:30.298 "is_configured": true, 00:35:30.298 "data_offset": 2048, 00:35:30.298 "data_size": 63488 00:35:30.298 } 00:35:30.298 ] 00:35:30.298 }' 00:35:30.298 23:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:30.298 23:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:30.558 23:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:35:30.558 23:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.558 23:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:35:30.558 [2024-12-09 23:17:11.103320] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:30.558 [2024-12-09 23:17:11.103357] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:30.558 [2024-12-09 23:17:11.106031] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:30.558 [2024-12-09 23:17:11.106095] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:30.558 [2024-12-09 23:17:11.106141] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:30.558 [2024-12-09 23:17:11.106155] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:35:30.558 { 00:35:30.558 "results": [ 00:35:30.558 { 00:35:30.558 "job": "raid_bdev1", 00:35:30.558 "core_mask": "0x1", 00:35:30.558 "workload": "randrw", 00:35:30.558 "percentage": 50, 00:35:30.558 "status": "finished", 00:35:30.558 "queue_depth": 1, 00:35:30.558 "io_size": 131072, 00:35:30.558 "runtime": 1.387343, 00:35:30.558 "iops": 15483.553814737956, 00:35:30.558 "mibps": 1935.4442268422445, 00:35:30.558 "io_failed": 1, 00:35:30.558 "io_timeout": 0, 00:35:30.558 "avg_latency_us": 89.23459461157171, 00:35:30.558 "min_latency_us": 27.759036144578314, 00:35:30.558 "max_latency_us": 1414.6827309236949 00:35:30.558 } 00:35:30.558 ], 00:35:30.558 "core_count": 1 00:35:30.558 } 00:35:30.558 23:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.558 23:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70995 00:35:30.558 23:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 70995 ']' 00:35:30.558 23:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 70995 00:35:30.558 23:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # 
uname 00:35:30.558 23:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:30.558 23:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70995 00:35:30.558 killing process with pid 70995 00:35:30.558 23:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:30.558 23:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:30.558 23:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70995' 00:35:30.558 23:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 70995 00:35:30.558 [2024-12-09 23:17:11.151215] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:30.558 23:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 70995 00:35:31.124 [2024-12-09 23:17:11.487646] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:32.506 23:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Y88lyVlGhL 00:35:32.506 23:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:35:32.506 23:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:35:32.506 23:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:35:32.506 23:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:35:32.506 23:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:35:32.506 23:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:35:32.506 23:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:35:32.506 00:35:32.506 real 0m4.827s 00:35:32.506 user 0m5.695s 00:35:32.506 sys 0m0.626s 00:35:32.506 
************************************ 00:35:32.506 END TEST raid_write_error_test 00:35:32.506 ************************************ 00:35:32.506 23:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:32.506 23:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:32.506 23:17:12 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:35:32.506 23:17:12 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:35:32.506 23:17:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:35:32.506 23:17:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:32.506 23:17:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:32.506 ************************************ 00:35:32.506 START TEST raid_state_function_test 00:35:32.506 ************************************ 00:35:32.506 23:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:35:32.506 23:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:35:32.506 23:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:35:32.506 23:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:35:32.506 23:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:35:32.506 23:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:35:32.506 23:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:32.506 23:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:35:32.506 23:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:32.506 23:17:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:32.506 23:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:35:32.506 23:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:32.506 23:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:32.506 23:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:35:32.506 23:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:32.506 23:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:32.506 23:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:35:32.506 23:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:32.506 23:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:32.506 23:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:35:32.506 23:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:35:32.506 23:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:35:32.506 23:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:35:32.506 23:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:35:32.506 23:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:35:32.506 23:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:35:32.506 23:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:35:32.506 23:17:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:35:32.507 23:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:35:32.507 23:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:35:32.507 23:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71144 00:35:32.507 23:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:35:32.507 23:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71144' 00:35:32.507 Process raid pid: 71144 00:35:32.507 23:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71144 00:35:32.507 23:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71144 ']' 00:35:32.507 23:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:32.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:32.507 23:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:32.507 23:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:32.507 23:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:32.507 23:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:32.507 [2024-12-09 23:17:12.952346] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:35:32.507 [2024-12-09 23:17:12.952492] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:32.507 [2024-12-09 23:17:13.119813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:32.766 [2024-12-09 23:17:13.241811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:33.033 [2024-12-09 23:17:13.459012] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:33.033 [2024-12-09 23:17:13.459059] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:33.295 23:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:33.295 23:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:35:33.295 23:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:35:33.295 23:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.295 23:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:33.295 [2024-12-09 23:17:13.818014] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:33.295 [2024-12-09 23:17:13.818081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:33.295 [2024-12-09 23:17:13.818094] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:33.295 [2024-12-09 23:17:13.818107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:33.295 [2024-12-09 23:17:13.818116] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:35:33.295 [2024-12-09 23:17:13.818129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:33.295 [2024-12-09 23:17:13.818153] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:35:33.295 [2024-12-09 23:17:13.818165] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:35:33.295 23:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.295 23:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:35:33.295 23:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:33.295 23:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:33.295 23:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:33.295 23:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:33.295 23:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:33.295 23:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:33.295 23:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:33.295 23:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:33.295 23:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:33.295 23:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:33.295 23:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:33.295 23:17:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.295 23:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:33.295 23:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.295 23:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:33.295 "name": "Existed_Raid", 00:35:33.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:33.295 "strip_size_kb": 64, 00:35:33.295 "state": "configuring", 00:35:33.295 "raid_level": "concat", 00:35:33.295 "superblock": false, 00:35:33.295 "num_base_bdevs": 4, 00:35:33.295 "num_base_bdevs_discovered": 0, 00:35:33.295 "num_base_bdevs_operational": 4, 00:35:33.295 "base_bdevs_list": [ 00:35:33.295 { 00:35:33.295 "name": "BaseBdev1", 00:35:33.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:33.295 "is_configured": false, 00:35:33.295 "data_offset": 0, 00:35:33.295 "data_size": 0 00:35:33.295 }, 00:35:33.295 { 00:35:33.295 "name": "BaseBdev2", 00:35:33.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:33.295 "is_configured": false, 00:35:33.295 "data_offset": 0, 00:35:33.295 "data_size": 0 00:35:33.295 }, 00:35:33.295 { 00:35:33.295 "name": "BaseBdev3", 00:35:33.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:33.295 "is_configured": false, 00:35:33.295 "data_offset": 0, 00:35:33.295 "data_size": 0 00:35:33.295 }, 00:35:33.295 { 00:35:33.295 "name": "BaseBdev4", 00:35:33.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:33.295 "is_configured": false, 00:35:33.295 "data_offset": 0, 00:35:33.295 "data_size": 0 00:35:33.295 } 00:35:33.295 ] 00:35:33.295 }' 00:35:33.295 23:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:33.295 23:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:33.862 [2024-12-09 23:17:14.241429] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:33.862 [2024-12-09 23:17:14.241599] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:33.862 [2024-12-09 23:17:14.249409] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:33.862 [2024-12-09 23:17:14.249588] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:33.862 [2024-12-09 23:17:14.249611] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:33.862 [2024-12-09 23:17:14.249626] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:33.862 [2024-12-09 23:17:14.249635] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:35:33.862 [2024-12-09 23:17:14.249647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:33.862 [2024-12-09 23:17:14.249655] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:35:33.862 [2024-12-09 23:17:14.249668] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:33.862 [2024-12-09 23:17:14.297252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:33.862 BaseBdev1 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:33.862 [ 00:35:33.862 { 00:35:33.862 "name": "BaseBdev1", 00:35:33.862 "aliases": [ 00:35:33.862 "877d5994-013b-4331-98fe-cadb1becc474" 00:35:33.862 ], 00:35:33.862 "product_name": "Malloc disk", 00:35:33.862 "block_size": 512, 00:35:33.862 "num_blocks": 65536, 00:35:33.862 "uuid": "877d5994-013b-4331-98fe-cadb1becc474", 00:35:33.862 "assigned_rate_limits": { 00:35:33.862 "rw_ios_per_sec": 0, 00:35:33.862 "rw_mbytes_per_sec": 0, 00:35:33.862 "r_mbytes_per_sec": 0, 00:35:33.862 "w_mbytes_per_sec": 0 00:35:33.862 }, 00:35:33.862 "claimed": true, 00:35:33.862 "claim_type": "exclusive_write", 00:35:33.862 "zoned": false, 00:35:33.862 "supported_io_types": { 00:35:33.862 "read": true, 00:35:33.862 "write": true, 00:35:33.862 "unmap": true, 00:35:33.862 "flush": true, 00:35:33.862 "reset": true, 00:35:33.862 "nvme_admin": false, 00:35:33.862 "nvme_io": false, 00:35:33.862 "nvme_io_md": false, 00:35:33.862 "write_zeroes": true, 00:35:33.862 "zcopy": true, 00:35:33.862 "get_zone_info": false, 00:35:33.862 "zone_management": false, 00:35:33.862 "zone_append": false, 00:35:33.862 "compare": false, 00:35:33.862 "compare_and_write": false, 00:35:33.862 "abort": true, 00:35:33.862 "seek_hole": false, 00:35:33.862 "seek_data": false, 00:35:33.862 "copy": true, 00:35:33.862 "nvme_iov_md": false 00:35:33.862 }, 00:35:33.862 "memory_domains": [ 00:35:33.862 { 00:35:33.862 "dma_device_id": "system", 00:35:33.862 "dma_device_type": 1 00:35:33.862 }, 00:35:33.862 { 00:35:33.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:33.862 "dma_device_type": 2 00:35:33.862 } 00:35:33.862 ], 00:35:33.862 "driver_specific": {} 00:35:33.862 } 00:35:33.862 ] 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.862 23:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:33.862 "name": "Existed_Raid", 
00:35:33.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:33.863 "strip_size_kb": 64, 00:35:33.863 "state": "configuring", 00:35:33.863 "raid_level": "concat", 00:35:33.863 "superblock": false, 00:35:33.863 "num_base_bdevs": 4, 00:35:33.863 "num_base_bdevs_discovered": 1, 00:35:33.863 "num_base_bdevs_operational": 4, 00:35:33.863 "base_bdevs_list": [ 00:35:33.863 { 00:35:33.863 "name": "BaseBdev1", 00:35:33.863 "uuid": "877d5994-013b-4331-98fe-cadb1becc474", 00:35:33.863 "is_configured": true, 00:35:33.863 "data_offset": 0, 00:35:33.863 "data_size": 65536 00:35:33.863 }, 00:35:33.863 { 00:35:33.863 "name": "BaseBdev2", 00:35:33.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:33.863 "is_configured": false, 00:35:33.863 "data_offset": 0, 00:35:33.863 "data_size": 0 00:35:33.863 }, 00:35:33.863 { 00:35:33.863 "name": "BaseBdev3", 00:35:33.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:33.863 "is_configured": false, 00:35:33.863 "data_offset": 0, 00:35:33.863 "data_size": 0 00:35:33.863 }, 00:35:33.863 { 00:35:33.863 "name": "BaseBdev4", 00:35:33.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:33.863 "is_configured": false, 00:35:33.863 "data_offset": 0, 00:35:33.863 "data_size": 0 00:35:33.863 } 00:35:33.863 ] 00:35:33.863 }' 00:35:33.863 23:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:33.863 23:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:34.430 23:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:35:34.430 23:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.430 23:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:34.430 [2024-12-09 23:17:14.776645] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:34.430 [2024-12-09 23:17:14.776706] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:35:34.430 23:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.430 23:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:35:34.430 23:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.430 23:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:34.430 [2024-12-09 23:17:14.784714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:34.430 [2024-12-09 23:17:14.787080] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:34.430 [2024-12-09 23:17:14.787256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:34.430 [2024-12-09 23:17:14.787362] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:35:34.430 [2024-12-09 23:17:14.787435] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:34.430 [2024-12-09 23:17:14.787517] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:35:34.430 [2024-12-09 23:17:14.787611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:35:34.430 23:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.430 23:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:35:34.430 23:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:35:34.430 23:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:35:34.430 23:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:34.430 23:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:34.430 23:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:34.430 23:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:34.430 23:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:34.430 23:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:34.430 23:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:34.430 23:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:34.430 23:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:34.430 23:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:34.430 23:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.430 23:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:34.430 23:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:34.430 23:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.430 23:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:34.430 "name": "Existed_Raid", 00:35:34.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:34.430 "strip_size_kb": 64, 00:35:34.430 "state": "configuring", 00:35:34.430 "raid_level": "concat", 00:35:34.430 "superblock": false, 00:35:34.430 "num_base_bdevs": 4, 00:35:34.430 
"num_base_bdevs_discovered": 1, 00:35:34.430 "num_base_bdevs_operational": 4, 00:35:34.430 "base_bdevs_list": [ 00:35:34.430 { 00:35:34.430 "name": "BaseBdev1", 00:35:34.430 "uuid": "877d5994-013b-4331-98fe-cadb1becc474", 00:35:34.430 "is_configured": true, 00:35:34.430 "data_offset": 0, 00:35:34.430 "data_size": 65536 00:35:34.430 }, 00:35:34.430 { 00:35:34.430 "name": "BaseBdev2", 00:35:34.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:34.430 "is_configured": false, 00:35:34.430 "data_offset": 0, 00:35:34.430 "data_size": 0 00:35:34.430 }, 00:35:34.430 { 00:35:34.430 "name": "BaseBdev3", 00:35:34.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:34.430 "is_configured": false, 00:35:34.430 "data_offset": 0, 00:35:34.430 "data_size": 0 00:35:34.430 }, 00:35:34.430 { 00:35:34.430 "name": "BaseBdev4", 00:35:34.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:34.430 "is_configured": false, 00:35:34.430 "data_offset": 0, 00:35:34.430 "data_size": 0 00:35:34.430 } 00:35:34.430 ] 00:35:34.430 }' 00:35:34.430 23:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:34.430 23:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:34.688 23:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:35:34.688 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.688 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:34.688 [2024-12-09 23:17:15.246913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:34.688 BaseBdev2 00:35:34.688 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.688 23:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:35:34.688 23:17:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:35:34.688 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:34.688 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:35:34.688 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:34.688 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:34.688 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:34.688 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.688 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:34.688 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.688 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:35:34.688 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.688 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:34.688 [ 00:35:34.688 { 00:35:34.688 "name": "BaseBdev2", 00:35:34.688 "aliases": [ 00:35:34.688 "7070c505-6f40-4da7-8074-355e7bf61efc" 00:35:34.688 ], 00:35:34.688 "product_name": "Malloc disk", 00:35:34.688 "block_size": 512, 00:35:34.688 "num_blocks": 65536, 00:35:34.688 "uuid": "7070c505-6f40-4da7-8074-355e7bf61efc", 00:35:34.688 "assigned_rate_limits": { 00:35:34.688 "rw_ios_per_sec": 0, 00:35:34.688 "rw_mbytes_per_sec": 0, 00:35:34.688 "r_mbytes_per_sec": 0, 00:35:34.688 "w_mbytes_per_sec": 0 00:35:34.688 }, 00:35:34.688 "claimed": true, 00:35:34.688 "claim_type": "exclusive_write", 00:35:34.688 "zoned": false, 00:35:34.688 "supported_io_types": { 
00:35:34.688 "read": true, 00:35:34.688 "write": true, 00:35:34.688 "unmap": true, 00:35:34.688 "flush": true, 00:35:34.688 "reset": true, 00:35:34.688 "nvme_admin": false, 00:35:34.688 "nvme_io": false, 00:35:34.688 "nvme_io_md": false, 00:35:34.688 "write_zeroes": true, 00:35:34.688 "zcopy": true, 00:35:34.688 "get_zone_info": false, 00:35:34.688 "zone_management": false, 00:35:34.688 "zone_append": false, 00:35:34.688 "compare": false, 00:35:34.688 "compare_and_write": false, 00:35:34.688 "abort": true, 00:35:34.688 "seek_hole": false, 00:35:34.688 "seek_data": false, 00:35:34.688 "copy": true, 00:35:34.688 "nvme_iov_md": false 00:35:34.688 }, 00:35:34.688 "memory_domains": [ 00:35:34.688 { 00:35:34.688 "dma_device_id": "system", 00:35:34.688 "dma_device_type": 1 00:35:34.688 }, 00:35:34.688 { 00:35:34.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:34.688 "dma_device_type": 2 00:35:34.688 } 00:35:34.688 ], 00:35:34.688 "driver_specific": {} 00:35:34.688 } 00:35:34.688 ] 00:35:34.688 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.688 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:35:34.688 23:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:35:34.688 23:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:35:34.688 23:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:35:34.688 23:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:34.688 23:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:34.688 23:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:34.688 23:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:35:34.688 23:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:34.688 23:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:34.689 23:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:34.689 23:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:34.689 23:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:34.689 23:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:34.689 23:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:34.689 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.689 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:34.947 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.947 23:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:34.947 "name": "Existed_Raid", 00:35:34.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:34.947 "strip_size_kb": 64, 00:35:34.947 "state": "configuring", 00:35:34.947 "raid_level": "concat", 00:35:34.947 "superblock": false, 00:35:34.947 "num_base_bdevs": 4, 00:35:34.947 "num_base_bdevs_discovered": 2, 00:35:34.947 "num_base_bdevs_operational": 4, 00:35:34.947 "base_bdevs_list": [ 00:35:34.947 { 00:35:34.947 "name": "BaseBdev1", 00:35:34.947 "uuid": "877d5994-013b-4331-98fe-cadb1becc474", 00:35:34.947 "is_configured": true, 00:35:34.947 "data_offset": 0, 00:35:34.947 "data_size": 65536 00:35:34.948 }, 00:35:34.948 { 00:35:34.948 "name": "BaseBdev2", 00:35:34.948 "uuid": "7070c505-6f40-4da7-8074-355e7bf61efc", 00:35:34.948 
"is_configured": true, 00:35:34.948 "data_offset": 0, 00:35:34.948 "data_size": 65536 00:35:34.948 }, 00:35:34.948 { 00:35:34.948 "name": "BaseBdev3", 00:35:34.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:34.948 "is_configured": false, 00:35:34.948 "data_offset": 0, 00:35:34.948 "data_size": 0 00:35:34.948 }, 00:35:34.948 { 00:35:34.948 "name": "BaseBdev4", 00:35:34.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:34.948 "is_configured": false, 00:35:34.948 "data_offset": 0, 00:35:34.948 "data_size": 0 00:35:34.948 } 00:35:34.948 ] 00:35:34.948 }' 00:35:34.948 23:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:34.948 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:35.205 23:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:35:35.205 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.205 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:35.205 [2024-12-09 23:17:15.789050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:35.205 BaseBdev3 00:35:35.205 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.205 23:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:35:35.205 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:35:35.205 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:35.205 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:35:35.205 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:35.205 23:17:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:35.205 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:35.205 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.205 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:35.205 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.205 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:35:35.205 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.205 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:35.205 [ 00:35:35.205 { 00:35:35.205 "name": "BaseBdev3", 00:35:35.205 "aliases": [ 00:35:35.205 "45fcd9ea-4950-45b1-ab6b-12bcf80a5a3c" 00:35:35.205 ], 00:35:35.205 "product_name": "Malloc disk", 00:35:35.205 "block_size": 512, 00:35:35.205 "num_blocks": 65536, 00:35:35.205 "uuid": "45fcd9ea-4950-45b1-ab6b-12bcf80a5a3c", 00:35:35.205 "assigned_rate_limits": { 00:35:35.205 "rw_ios_per_sec": 0, 00:35:35.205 "rw_mbytes_per_sec": 0, 00:35:35.205 "r_mbytes_per_sec": 0, 00:35:35.205 "w_mbytes_per_sec": 0 00:35:35.205 }, 00:35:35.205 "claimed": true, 00:35:35.205 "claim_type": "exclusive_write", 00:35:35.205 "zoned": false, 00:35:35.205 "supported_io_types": { 00:35:35.205 "read": true, 00:35:35.205 "write": true, 00:35:35.205 "unmap": true, 00:35:35.205 "flush": true, 00:35:35.205 "reset": true, 00:35:35.205 "nvme_admin": false, 00:35:35.205 "nvme_io": false, 00:35:35.205 "nvme_io_md": false, 00:35:35.206 "write_zeroes": true, 00:35:35.206 "zcopy": true, 00:35:35.206 "get_zone_info": false, 00:35:35.206 "zone_management": false, 00:35:35.206 "zone_append": false, 00:35:35.206 "compare": false, 00:35:35.206 "compare_and_write": false, 
00:35:35.206 "abort": true, 00:35:35.206 "seek_hole": false, 00:35:35.206 "seek_data": false, 00:35:35.206 "copy": true, 00:35:35.206 "nvme_iov_md": false 00:35:35.206 }, 00:35:35.206 "memory_domains": [ 00:35:35.206 { 00:35:35.206 "dma_device_id": "system", 00:35:35.206 "dma_device_type": 1 00:35:35.206 }, 00:35:35.206 { 00:35:35.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:35.206 "dma_device_type": 2 00:35:35.206 } 00:35:35.206 ], 00:35:35.206 "driver_specific": {} 00:35:35.206 } 00:35:35.206 ] 00:35:35.206 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.206 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:35:35.206 23:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:35:35.206 23:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:35:35.206 23:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:35:35.206 23:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:35.206 23:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:35.206 23:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:35.206 23:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:35.206 23:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:35.206 23:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:35.206 23:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:35.206 23:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:35:35.206 23:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:35.465 23:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:35.465 23:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:35.465 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.465 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:35.465 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.465 23:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:35.465 "name": "Existed_Raid", 00:35:35.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:35.465 "strip_size_kb": 64, 00:35:35.465 "state": "configuring", 00:35:35.465 "raid_level": "concat", 00:35:35.465 "superblock": false, 00:35:35.465 "num_base_bdevs": 4, 00:35:35.465 "num_base_bdevs_discovered": 3, 00:35:35.465 "num_base_bdevs_operational": 4, 00:35:35.465 "base_bdevs_list": [ 00:35:35.465 { 00:35:35.465 "name": "BaseBdev1", 00:35:35.465 "uuid": "877d5994-013b-4331-98fe-cadb1becc474", 00:35:35.465 "is_configured": true, 00:35:35.465 "data_offset": 0, 00:35:35.465 "data_size": 65536 00:35:35.465 }, 00:35:35.465 { 00:35:35.465 "name": "BaseBdev2", 00:35:35.465 "uuid": "7070c505-6f40-4da7-8074-355e7bf61efc", 00:35:35.465 "is_configured": true, 00:35:35.465 "data_offset": 0, 00:35:35.465 "data_size": 65536 00:35:35.465 }, 00:35:35.465 { 00:35:35.465 "name": "BaseBdev3", 00:35:35.465 "uuid": "45fcd9ea-4950-45b1-ab6b-12bcf80a5a3c", 00:35:35.465 "is_configured": true, 00:35:35.465 "data_offset": 0, 00:35:35.465 "data_size": 65536 00:35:35.465 }, 00:35:35.465 { 00:35:35.465 "name": "BaseBdev4", 00:35:35.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:35.465 "is_configured": false, 
00:35:35.465 "data_offset": 0, 00:35:35.465 "data_size": 0 00:35:35.465 } 00:35:35.465 ] 00:35:35.465 }' 00:35:35.465 23:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:35.465 23:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:35.726 23:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:35:35.726 23:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.726 23:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:35.726 [2024-12-09 23:17:16.310696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:35:35.726 [2024-12-09 23:17:16.310763] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:35:35.726 [2024-12-09 23:17:16.310775] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:35:35.726 [2024-12-09 23:17:16.311093] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:35:35.726 [2024-12-09 23:17:16.311253] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:35:35.726 [2024-12-09 23:17:16.311268] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:35:35.726 [2024-12-09 23:17:16.311599] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:35.726 BaseBdev4 00:35:35.726 23:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.726 23:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:35:35.726 23:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:35:35.726 23:17:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:35.726 23:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:35:35.726 23:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:35.726 23:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:35.726 23:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:35.726 23:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.726 23:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:35.726 23:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.726 23:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:35:35.726 23:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.726 23:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:35.726 [ 00:35:35.726 { 00:35:35.726 "name": "BaseBdev4", 00:35:35.726 "aliases": [ 00:35:35.726 "d5dcf3e1-bcdb-46e2-9e1c-07e46c0d4d7a" 00:35:35.726 ], 00:35:35.726 "product_name": "Malloc disk", 00:35:35.726 "block_size": 512, 00:35:35.726 "num_blocks": 65536, 00:35:35.726 "uuid": "d5dcf3e1-bcdb-46e2-9e1c-07e46c0d4d7a", 00:35:35.726 "assigned_rate_limits": { 00:35:35.726 "rw_ios_per_sec": 0, 00:35:35.726 "rw_mbytes_per_sec": 0, 00:35:35.726 "r_mbytes_per_sec": 0, 00:35:35.726 "w_mbytes_per_sec": 0 00:35:35.726 }, 00:35:35.726 "claimed": true, 00:35:35.726 "claim_type": "exclusive_write", 00:35:35.726 "zoned": false, 00:35:35.726 "supported_io_types": { 00:35:35.726 "read": true, 00:35:35.726 "write": true, 00:35:35.726 "unmap": true, 00:35:35.726 "flush": true, 00:35:35.726 "reset": true, 00:35:35.726 
"nvme_admin": false, 00:35:35.726 "nvme_io": false, 00:35:35.726 "nvme_io_md": false, 00:35:35.726 "write_zeroes": true, 00:35:35.726 "zcopy": true, 00:35:35.726 "get_zone_info": false, 00:35:35.726 "zone_management": false, 00:35:35.726 "zone_append": false, 00:35:35.726 "compare": false, 00:35:35.726 "compare_and_write": false, 00:35:35.726 "abort": true, 00:35:35.726 "seek_hole": false, 00:35:35.726 "seek_data": false, 00:35:35.726 "copy": true, 00:35:35.726 "nvme_iov_md": false 00:35:35.726 }, 00:35:35.726 "memory_domains": [ 00:35:35.726 { 00:35:35.726 "dma_device_id": "system", 00:35:35.726 "dma_device_type": 1 00:35:35.726 }, 00:35:35.726 { 00:35:35.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:35.726 "dma_device_type": 2 00:35:35.726 } 00:35:35.726 ], 00:35:35.726 "driver_specific": {} 00:35:35.726 } 00:35:35.726 ] 00:35:35.726 23:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.726 23:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:35:35.726 23:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:35:35.726 23:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:35:35.726 23:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:35:35.726 23:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:35.726 23:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:35.726 23:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:35.726 23:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:35.726 23:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:35.726 
23:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:35.726 23:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:35.726 23:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:35.726 23:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:35.985 23:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:35.985 23:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:35.985 23:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.985 23:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:35.985 23:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.985 23:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:35.985 "name": "Existed_Raid", 00:35:35.985 "uuid": "d4513d24-434c-4bf4-96f9-5bcdf23460d8", 00:35:35.985 "strip_size_kb": 64, 00:35:35.985 "state": "online", 00:35:35.985 "raid_level": "concat", 00:35:35.985 "superblock": false, 00:35:35.985 "num_base_bdevs": 4, 00:35:35.985 "num_base_bdevs_discovered": 4, 00:35:35.985 "num_base_bdevs_operational": 4, 00:35:35.985 "base_bdevs_list": [ 00:35:35.985 { 00:35:35.985 "name": "BaseBdev1", 00:35:35.985 "uuid": "877d5994-013b-4331-98fe-cadb1becc474", 00:35:35.985 "is_configured": true, 00:35:35.985 "data_offset": 0, 00:35:35.985 "data_size": 65536 00:35:35.985 }, 00:35:35.985 { 00:35:35.985 "name": "BaseBdev2", 00:35:35.985 "uuid": "7070c505-6f40-4da7-8074-355e7bf61efc", 00:35:35.985 "is_configured": true, 00:35:35.985 "data_offset": 0, 00:35:35.985 "data_size": 65536 00:35:35.985 }, 00:35:35.985 { 00:35:35.985 "name": "BaseBdev3", 
00:35:35.985 "uuid": "45fcd9ea-4950-45b1-ab6b-12bcf80a5a3c", 00:35:35.985 "is_configured": true, 00:35:35.985 "data_offset": 0, 00:35:35.985 "data_size": 65536 00:35:35.985 }, 00:35:35.985 { 00:35:35.985 "name": "BaseBdev4", 00:35:35.985 "uuid": "d5dcf3e1-bcdb-46e2-9e1c-07e46c0d4d7a", 00:35:35.985 "is_configured": true, 00:35:35.985 "data_offset": 0, 00:35:35.985 "data_size": 65536 00:35:35.985 } 00:35:35.985 ] 00:35:35.985 }' 00:35:35.985 23:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:35.985 23:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:36.244 23:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:35:36.244 23:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:35:36.244 23:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:35:36.244 23:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:35:36.244 23:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:35:36.244 23:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:35:36.244 23:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:35:36.244 23:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:35:36.244 23:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.244 23:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:36.244 [2024-12-09 23:17:16.822807] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:36.244 23:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.244 
23:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:36.244 "name": "Existed_Raid", 00:35:36.244 "aliases": [ 00:35:36.244 "d4513d24-434c-4bf4-96f9-5bcdf23460d8" 00:35:36.244 ], 00:35:36.244 "product_name": "Raid Volume", 00:35:36.244 "block_size": 512, 00:35:36.244 "num_blocks": 262144, 00:35:36.244 "uuid": "d4513d24-434c-4bf4-96f9-5bcdf23460d8", 00:35:36.244 "assigned_rate_limits": { 00:35:36.244 "rw_ios_per_sec": 0, 00:35:36.244 "rw_mbytes_per_sec": 0, 00:35:36.244 "r_mbytes_per_sec": 0, 00:35:36.244 "w_mbytes_per_sec": 0 00:35:36.244 }, 00:35:36.244 "claimed": false, 00:35:36.244 "zoned": false, 00:35:36.244 "supported_io_types": { 00:35:36.244 "read": true, 00:35:36.244 "write": true, 00:35:36.244 "unmap": true, 00:35:36.244 "flush": true, 00:35:36.244 "reset": true, 00:35:36.244 "nvme_admin": false, 00:35:36.244 "nvme_io": false, 00:35:36.244 "nvme_io_md": false, 00:35:36.244 "write_zeroes": true, 00:35:36.244 "zcopy": false, 00:35:36.244 "get_zone_info": false, 00:35:36.244 "zone_management": false, 00:35:36.244 "zone_append": false, 00:35:36.244 "compare": false, 00:35:36.244 "compare_and_write": false, 00:35:36.244 "abort": false, 00:35:36.244 "seek_hole": false, 00:35:36.244 "seek_data": false, 00:35:36.244 "copy": false, 00:35:36.244 "nvme_iov_md": false 00:35:36.244 }, 00:35:36.244 "memory_domains": [ 00:35:36.244 { 00:35:36.244 "dma_device_id": "system", 00:35:36.244 "dma_device_type": 1 00:35:36.244 }, 00:35:36.244 { 00:35:36.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:36.244 "dma_device_type": 2 00:35:36.244 }, 00:35:36.244 { 00:35:36.244 "dma_device_id": "system", 00:35:36.244 "dma_device_type": 1 00:35:36.244 }, 00:35:36.244 { 00:35:36.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:36.244 "dma_device_type": 2 00:35:36.244 }, 00:35:36.244 { 00:35:36.244 "dma_device_id": "system", 00:35:36.244 "dma_device_type": 1 00:35:36.244 }, 00:35:36.244 { 00:35:36.244 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:35:36.244 "dma_device_type": 2 00:35:36.244 }, 00:35:36.244 { 00:35:36.244 "dma_device_id": "system", 00:35:36.244 "dma_device_type": 1 00:35:36.244 }, 00:35:36.244 { 00:35:36.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:36.244 "dma_device_type": 2 00:35:36.244 } 00:35:36.244 ], 00:35:36.244 "driver_specific": { 00:35:36.244 "raid": { 00:35:36.244 "uuid": "d4513d24-434c-4bf4-96f9-5bcdf23460d8", 00:35:36.244 "strip_size_kb": 64, 00:35:36.244 "state": "online", 00:35:36.244 "raid_level": "concat", 00:35:36.244 "superblock": false, 00:35:36.244 "num_base_bdevs": 4, 00:35:36.244 "num_base_bdevs_discovered": 4, 00:35:36.244 "num_base_bdevs_operational": 4, 00:35:36.244 "base_bdevs_list": [ 00:35:36.244 { 00:35:36.244 "name": "BaseBdev1", 00:35:36.244 "uuid": "877d5994-013b-4331-98fe-cadb1becc474", 00:35:36.244 "is_configured": true, 00:35:36.244 "data_offset": 0, 00:35:36.244 "data_size": 65536 00:35:36.244 }, 00:35:36.244 { 00:35:36.244 "name": "BaseBdev2", 00:35:36.245 "uuid": "7070c505-6f40-4da7-8074-355e7bf61efc", 00:35:36.245 "is_configured": true, 00:35:36.245 "data_offset": 0, 00:35:36.245 "data_size": 65536 00:35:36.245 }, 00:35:36.245 { 00:35:36.245 "name": "BaseBdev3", 00:35:36.245 "uuid": "45fcd9ea-4950-45b1-ab6b-12bcf80a5a3c", 00:35:36.245 "is_configured": true, 00:35:36.245 "data_offset": 0, 00:35:36.245 "data_size": 65536 00:35:36.245 }, 00:35:36.245 { 00:35:36.245 "name": "BaseBdev4", 00:35:36.245 "uuid": "d5dcf3e1-bcdb-46e2-9e1c-07e46c0d4d7a", 00:35:36.245 "is_configured": true, 00:35:36.245 "data_offset": 0, 00:35:36.245 "data_size": 65536 00:35:36.245 } 00:35:36.245 ] 00:35:36.245 } 00:35:36.245 } 00:35:36.245 }' 00:35:36.245 23:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:36.503 23:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:35:36.503 BaseBdev2 
00:35:36.503 BaseBdev3 00:35:36.503 BaseBdev4' 00:35:36.503 23:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:36.503 23:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:35:36.503 23:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:36.503 23:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:35:36.503 23:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.503 23:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:36.503 23:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:36.503 23:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.503 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:36.503 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:36.503 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:36.503 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:35:36.503 23:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.503 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:36.503 23:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:36.503 23:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.503 23:17:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:36.503 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:36.503 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:36.503 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:35:36.503 23:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.503 23:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:36.503 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:36.503 23:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.503 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:36.503 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:36.503 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:36.503 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:35:36.503 23:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.503 23:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:36.503 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:36.503 23:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.762 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:36.762 23:17:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:36.762 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:35:36.762 23:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.762 23:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:36.762 [2024-12-09 23:17:17.150497] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:36.762 [2024-12-09 23:17:17.150533] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:36.762 [2024-12-09 23:17:17.150591] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:36.762 23:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.762 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:35:36.762 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:35:36.762 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:35:36.762 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:35:36.762 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:35:36.762 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:35:36.762 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:36.762 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:35:36.762 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:36.762 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:35:36.762 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:36.762 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:36.762 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:36.762 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:36.762 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:36.762 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:36.762 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:36.762 23:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.762 23:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:36.762 23:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.762 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:36.762 "name": "Existed_Raid", 00:35:36.762 "uuid": "d4513d24-434c-4bf4-96f9-5bcdf23460d8", 00:35:36.762 "strip_size_kb": 64, 00:35:36.762 "state": "offline", 00:35:36.762 "raid_level": "concat", 00:35:36.762 "superblock": false, 00:35:36.762 "num_base_bdevs": 4, 00:35:36.762 "num_base_bdevs_discovered": 3, 00:35:36.762 "num_base_bdevs_operational": 3, 00:35:36.762 "base_bdevs_list": [ 00:35:36.762 { 00:35:36.762 "name": null, 00:35:36.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:36.762 "is_configured": false, 00:35:36.762 "data_offset": 0, 00:35:36.762 "data_size": 65536 00:35:36.762 }, 00:35:36.762 { 00:35:36.762 "name": "BaseBdev2", 00:35:36.762 "uuid": "7070c505-6f40-4da7-8074-355e7bf61efc", 00:35:36.762 "is_configured": 
true, 00:35:36.762 "data_offset": 0, 00:35:36.762 "data_size": 65536 00:35:36.762 }, 00:35:36.762 { 00:35:36.762 "name": "BaseBdev3", 00:35:36.762 "uuid": "45fcd9ea-4950-45b1-ab6b-12bcf80a5a3c", 00:35:36.762 "is_configured": true, 00:35:36.762 "data_offset": 0, 00:35:36.762 "data_size": 65536 00:35:36.762 }, 00:35:36.762 { 00:35:36.762 "name": "BaseBdev4", 00:35:36.762 "uuid": "d5dcf3e1-bcdb-46e2-9e1c-07e46c0d4d7a", 00:35:36.762 "is_configured": true, 00:35:36.762 "data_offset": 0, 00:35:36.762 "data_size": 65536 00:35:36.762 } 00:35:36.762 ] 00:35:36.762 }' 00:35:36.762 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:36.762 23:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:37.328 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:35:37.328 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:37.328 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:35:37.328 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:37.328 23:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.328 23:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:37.328 23:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.328 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:35:37.328 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:37.328 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:35:37.328 23:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:37.328 23:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:37.328 [2024-12-09 23:17:17.729600] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:37.328 23:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.328 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:35:37.328 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:37.328 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:37.328 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:35:37.328 23:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.328 23:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:37.328 23:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.328 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:35:37.328 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:37.328 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:35:37.328 23:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.328 23:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:37.328 [2024-12-09 23:17:17.885227] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:35:37.588 23:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.588 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:35:37.588 23:17:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:37.588 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:37.588 23:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:35:37.588 23:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.588 23:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:37.588 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.588 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:35:37.588 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:37.588 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:35:37.588 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.588 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:37.588 [2024-12-09 23:17:18.036731] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:35:37.588 [2024-12-09 23:17:18.036788] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:35:37.588 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.588 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:35:37.588 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:37.588 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:37.588 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:35:37.588 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.588 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:37.588 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.588 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:35:37.588 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:35:37.588 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:35:37.588 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:35:37.588 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:35:37.588 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:35:37.588 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.588 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:37.849 BaseBdev2 00:35:37.849 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.849 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:35:37.849 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:35:37.849 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:37.849 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:35:37.849 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:37.849 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:35:37.849 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:37.849 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.849 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:37.849 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.849 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:35:37.849 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.849 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:37.849 [ 00:35:37.849 { 00:35:37.849 "name": "BaseBdev2", 00:35:37.849 "aliases": [ 00:35:37.849 "40958b14-a115-4e34-adcd-aff75696dcec" 00:35:37.849 ], 00:35:37.849 "product_name": "Malloc disk", 00:35:37.849 "block_size": 512, 00:35:37.849 "num_blocks": 65536, 00:35:37.849 "uuid": "40958b14-a115-4e34-adcd-aff75696dcec", 00:35:37.849 "assigned_rate_limits": { 00:35:37.849 "rw_ios_per_sec": 0, 00:35:37.849 "rw_mbytes_per_sec": 0, 00:35:37.849 "r_mbytes_per_sec": 0, 00:35:37.849 "w_mbytes_per_sec": 0 00:35:37.849 }, 00:35:37.849 "claimed": false, 00:35:37.849 "zoned": false, 00:35:37.849 "supported_io_types": { 00:35:37.849 "read": true, 00:35:37.849 "write": true, 00:35:37.849 "unmap": true, 00:35:37.849 "flush": true, 00:35:37.849 "reset": true, 00:35:37.849 "nvme_admin": false, 00:35:37.849 "nvme_io": false, 00:35:37.849 "nvme_io_md": false, 00:35:37.849 "write_zeroes": true, 00:35:37.849 "zcopy": true, 00:35:37.849 "get_zone_info": false, 00:35:37.849 "zone_management": false, 00:35:37.849 "zone_append": false, 00:35:37.849 "compare": false, 00:35:37.849 "compare_and_write": false, 00:35:37.849 "abort": true, 00:35:37.849 "seek_hole": false, 00:35:37.849 
"seek_data": false, 00:35:37.849 "copy": true, 00:35:37.849 "nvme_iov_md": false 00:35:37.849 }, 00:35:37.849 "memory_domains": [ 00:35:37.849 { 00:35:37.849 "dma_device_id": "system", 00:35:37.849 "dma_device_type": 1 00:35:37.849 }, 00:35:37.849 { 00:35:37.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:37.849 "dma_device_type": 2 00:35:37.849 } 00:35:37.849 ], 00:35:37.849 "driver_specific": {} 00:35:37.849 } 00:35:37.849 ] 00:35:37.849 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.849 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:35:37.849 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:35:37.849 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:35:37.849 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:35:37.849 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.849 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:37.849 BaseBdev3 00:35:37.849 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.849 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:35:37.849 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:35:37.849 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:37.849 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:35:37.849 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:37.849 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:35:37.849 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:37.849 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.849 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:37.849 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.849 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:35:37.849 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.849 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:37.849 [ 00:35:37.849 { 00:35:37.849 "name": "BaseBdev3", 00:35:37.849 "aliases": [ 00:35:37.849 "32df930d-d0a4-4b04-970c-1cb81bf9dfdb" 00:35:37.849 ], 00:35:37.849 "product_name": "Malloc disk", 00:35:37.849 "block_size": 512, 00:35:37.849 "num_blocks": 65536, 00:35:37.849 "uuid": "32df930d-d0a4-4b04-970c-1cb81bf9dfdb", 00:35:37.849 "assigned_rate_limits": { 00:35:37.849 "rw_ios_per_sec": 0, 00:35:37.849 "rw_mbytes_per_sec": 0, 00:35:37.849 "r_mbytes_per_sec": 0, 00:35:37.849 "w_mbytes_per_sec": 0 00:35:37.849 }, 00:35:37.849 "claimed": false, 00:35:37.849 "zoned": false, 00:35:37.849 "supported_io_types": { 00:35:37.849 "read": true, 00:35:37.849 "write": true, 00:35:37.849 "unmap": true, 00:35:37.849 "flush": true, 00:35:37.849 "reset": true, 00:35:37.849 "nvme_admin": false, 00:35:37.849 "nvme_io": false, 00:35:37.849 "nvme_io_md": false, 00:35:37.849 "write_zeroes": true, 00:35:37.849 "zcopy": true, 00:35:37.849 "get_zone_info": false, 00:35:37.849 "zone_management": false, 00:35:37.849 "zone_append": false, 00:35:37.849 "compare": false, 00:35:37.849 "compare_and_write": false, 00:35:37.849 "abort": true, 00:35:37.849 "seek_hole": false, 00:35:37.849 "seek_data": false, 
00:35:37.850 "copy": true, 00:35:37.850 "nvme_iov_md": false 00:35:37.850 }, 00:35:37.850 "memory_domains": [ 00:35:37.850 { 00:35:37.850 "dma_device_id": "system", 00:35:37.850 "dma_device_type": 1 00:35:37.850 }, 00:35:37.850 { 00:35:37.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:37.850 "dma_device_type": 2 00:35:37.850 } 00:35:37.850 ], 00:35:37.850 "driver_specific": {} 00:35:37.850 } 00:35:37.850 ] 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:37.850 BaseBdev4 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:37.850 
23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:37.850 [ 00:35:37.850 { 00:35:37.850 "name": "BaseBdev4", 00:35:37.850 "aliases": [ 00:35:37.850 "3074d1f2-3859-4a89-8c05-0ee3d5211270" 00:35:37.850 ], 00:35:37.850 "product_name": "Malloc disk", 00:35:37.850 "block_size": 512, 00:35:37.850 "num_blocks": 65536, 00:35:37.850 "uuid": "3074d1f2-3859-4a89-8c05-0ee3d5211270", 00:35:37.850 "assigned_rate_limits": { 00:35:37.850 "rw_ios_per_sec": 0, 00:35:37.850 "rw_mbytes_per_sec": 0, 00:35:37.850 "r_mbytes_per_sec": 0, 00:35:37.850 "w_mbytes_per_sec": 0 00:35:37.850 }, 00:35:37.850 "claimed": false, 00:35:37.850 "zoned": false, 00:35:37.850 "supported_io_types": { 00:35:37.850 "read": true, 00:35:37.850 "write": true, 00:35:37.850 "unmap": true, 00:35:37.850 "flush": true, 00:35:37.850 "reset": true, 00:35:37.850 "nvme_admin": false, 00:35:37.850 "nvme_io": false, 00:35:37.850 "nvme_io_md": false, 00:35:37.850 "write_zeroes": true, 00:35:37.850 "zcopy": true, 00:35:37.850 "get_zone_info": false, 00:35:37.850 "zone_management": false, 00:35:37.850 "zone_append": false, 00:35:37.850 "compare": false, 00:35:37.850 "compare_and_write": false, 00:35:37.850 "abort": true, 00:35:37.850 "seek_hole": false, 00:35:37.850 "seek_data": false, 00:35:37.850 
"copy": true, 00:35:37.850 "nvme_iov_md": false 00:35:37.850 }, 00:35:37.850 "memory_domains": [ 00:35:37.850 { 00:35:37.850 "dma_device_id": "system", 00:35:37.850 "dma_device_type": 1 00:35:37.850 }, 00:35:37.850 { 00:35:37.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:37.850 "dma_device_type": 2 00:35:37.850 } 00:35:37.850 ], 00:35:37.850 "driver_specific": {} 00:35:37.850 } 00:35:37.850 ] 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:37.850 [2024-12-09 23:17:18.438946] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:37.850 [2024-12-09 23:17:18.439148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:37.850 [2024-12-09 23:17:18.439265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:37.850 [2024-12-09 23:17:18.441804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:37.850 [2024-12-09 23:17:18.442003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.850 23:17:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:37.850 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.108 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:38.108 "name": "Existed_Raid", 00:35:38.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:38.108 "strip_size_kb": 64, 00:35:38.108 "state": "configuring", 00:35:38.108 
"raid_level": "concat", 00:35:38.108 "superblock": false, 00:35:38.108 "num_base_bdevs": 4, 00:35:38.108 "num_base_bdevs_discovered": 3, 00:35:38.108 "num_base_bdevs_operational": 4, 00:35:38.108 "base_bdevs_list": [ 00:35:38.108 { 00:35:38.108 "name": "BaseBdev1", 00:35:38.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:38.108 "is_configured": false, 00:35:38.108 "data_offset": 0, 00:35:38.108 "data_size": 0 00:35:38.108 }, 00:35:38.108 { 00:35:38.108 "name": "BaseBdev2", 00:35:38.108 "uuid": "40958b14-a115-4e34-adcd-aff75696dcec", 00:35:38.108 "is_configured": true, 00:35:38.108 "data_offset": 0, 00:35:38.108 "data_size": 65536 00:35:38.108 }, 00:35:38.108 { 00:35:38.108 "name": "BaseBdev3", 00:35:38.108 "uuid": "32df930d-d0a4-4b04-970c-1cb81bf9dfdb", 00:35:38.108 "is_configured": true, 00:35:38.108 "data_offset": 0, 00:35:38.108 "data_size": 65536 00:35:38.108 }, 00:35:38.108 { 00:35:38.108 "name": "BaseBdev4", 00:35:38.108 "uuid": "3074d1f2-3859-4a89-8c05-0ee3d5211270", 00:35:38.108 "is_configured": true, 00:35:38.108 "data_offset": 0, 00:35:38.108 "data_size": 65536 00:35:38.108 } 00:35:38.108 ] 00:35:38.108 }' 00:35:38.108 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:38.108 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:38.367 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:35:38.367 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.367 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:38.367 [2024-12-09 23:17:18.878448] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:38.367 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.367 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:35:38.367 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:38.367 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:38.367 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:38.367 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:38.367 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:38.367 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:38.367 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:38.367 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:38.367 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:38.367 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:38.367 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:38.367 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.367 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:38.367 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.367 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:38.367 "name": "Existed_Raid", 00:35:38.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:38.367 "strip_size_kb": 64, 00:35:38.367 "state": "configuring", 00:35:38.367 "raid_level": "concat", 00:35:38.367 "superblock": false, 
00:35:38.367 "num_base_bdevs": 4, 00:35:38.367 "num_base_bdevs_discovered": 2, 00:35:38.367 "num_base_bdevs_operational": 4, 00:35:38.367 "base_bdevs_list": [ 00:35:38.367 { 00:35:38.367 "name": "BaseBdev1", 00:35:38.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:38.367 "is_configured": false, 00:35:38.368 "data_offset": 0, 00:35:38.368 "data_size": 0 00:35:38.368 }, 00:35:38.368 { 00:35:38.368 "name": null, 00:35:38.368 "uuid": "40958b14-a115-4e34-adcd-aff75696dcec", 00:35:38.368 "is_configured": false, 00:35:38.368 "data_offset": 0, 00:35:38.368 "data_size": 65536 00:35:38.368 }, 00:35:38.368 { 00:35:38.368 "name": "BaseBdev3", 00:35:38.368 "uuid": "32df930d-d0a4-4b04-970c-1cb81bf9dfdb", 00:35:38.368 "is_configured": true, 00:35:38.368 "data_offset": 0, 00:35:38.368 "data_size": 65536 00:35:38.368 }, 00:35:38.368 { 00:35:38.368 "name": "BaseBdev4", 00:35:38.368 "uuid": "3074d1f2-3859-4a89-8c05-0ee3d5211270", 00:35:38.368 "is_configured": true, 00:35:38.368 "data_offset": 0, 00:35:38.368 "data_size": 65536 00:35:38.368 } 00:35:38.368 ] 00:35:38.368 }' 00:35:38.368 23:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:38.368 23:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:38.935 23:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:38.935 23:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.935 23:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:38.935 23:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:35:38.935 23:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.935 23:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:35:38.935 23:17:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:35:38.935 23:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.935 23:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:38.935 [2024-12-09 23:17:19.387259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:38.935 BaseBdev1 00:35:38.935 23:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.935 23:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:35:38.935 23:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:35:38.935 23:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:38.935 23:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:35:38.935 23:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:38.935 23:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:38.935 23:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:38.935 23:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.936 23:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:38.936 23:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.936 23:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:38.936 23:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.936 23:17:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:35:38.936 [ 00:35:38.936 { 00:35:38.936 "name": "BaseBdev1", 00:35:38.936 "aliases": [ 00:35:38.936 "e53124e2-0090-4f88-8ebf-a4c1521e4526" 00:35:38.936 ], 00:35:38.936 "product_name": "Malloc disk", 00:35:38.936 "block_size": 512, 00:35:38.936 "num_blocks": 65536, 00:35:38.936 "uuid": "e53124e2-0090-4f88-8ebf-a4c1521e4526", 00:35:38.936 "assigned_rate_limits": { 00:35:38.936 "rw_ios_per_sec": 0, 00:35:38.936 "rw_mbytes_per_sec": 0, 00:35:38.936 "r_mbytes_per_sec": 0, 00:35:38.936 "w_mbytes_per_sec": 0 00:35:38.936 }, 00:35:38.936 "claimed": true, 00:35:38.936 "claim_type": "exclusive_write", 00:35:38.936 "zoned": false, 00:35:38.936 "supported_io_types": { 00:35:38.936 "read": true, 00:35:38.936 "write": true, 00:35:38.936 "unmap": true, 00:35:38.936 "flush": true, 00:35:38.936 "reset": true, 00:35:38.936 "nvme_admin": false, 00:35:38.936 "nvme_io": false, 00:35:38.936 "nvme_io_md": false, 00:35:38.936 "write_zeroes": true, 00:35:38.936 "zcopy": true, 00:35:38.936 "get_zone_info": false, 00:35:38.936 "zone_management": false, 00:35:38.936 "zone_append": false, 00:35:38.936 "compare": false, 00:35:38.936 "compare_and_write": false, 00:35:38.936 "abort": true, 00:35:38.936 "seek_hole": false, 00:35:38.936 "seek_data": false, 00:35:38.936 "copy": true, 00:35:38.936 "nvme_iov_md": false 00:35:38.936 }, 00:35:38.936 "memory_domains": [ 00:35:38.936 { 00:35:38.936 "dma_device_id": "system", 00:35:38.936 "dma_device_type": 1 00:35:38.936 }, 00:35:38.936 { 00:35:38.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:38.936 "dma_device_type": 2 00:35:38.936 } 00:35:38.936 ], 00:35:38.936 "driver_specific": {} 00:35:38.936 } 00:35:38.936 ] 00:35:38.936 23:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.936 23:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:35:38.936 23:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:35:38.936 23:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:38.936 23:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:38.936 23:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:38.936 23:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:38.936 23:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:38.936 23:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:38.936 23:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:38.936 23:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:38.936 23:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:38.936 23:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:38.936 23:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.936 23:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:38.936 23:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:38.936 23:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.936 23:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:38.936 "name": "Existed_Raid", 00:35:38.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:38.936 "strip_size_kb": 64, 00:35:38.936 "state": "configuring", 00:35:38.936 "raid_level": "concat", 00:35:38.936 "superblock": false, 
00:35:38.936 "num_base_bdevs": 4, 00:35:38.936 "num_base_bdevs_discovered": 3, 00:35:38.936 "num_base_bdevs_operational": 4, 00:35:38.936 "base_bdevs_list": [ 00:35:38.936 { 00:35:38.936 "name": "BaseBdev1", 00:35:38.936 "uuid": "e53124e2-0090-4f88-8ebf-a4c1521e4526", 00:35:38.936 "is_configured": true, 00:35:38.936 "data_offset": 0, 00:35:38.936 "data_size": 65536 00:35:38.936 }, 00:35:38.936 { 00:35:38.936 "name": null, 00:35:38.936 "uuid": "40958b14-a115-4e34-adcd-aff75696dcec", 00:35:38.936 "is_configured": false, 00:35:38.936 "data_offset": 0, 00:35:38.936 "data_size": 65536 00:35:38.936 }, 00:35:38.936 { 00:35:38.936 "name": "BaseBdev3", 00:35:38.936 "uuid": "32df930d-d0a4-4b04-970c-1cb81bf9dfdb", 00:35:38.936 "is_configured": true, 00:35:38.936 "data_offset": 0, 00:35:38.936 "data_size": 65536 00:35:38.936 }, 00:35:38.936 { 00:35:38.936 "name": "BaseBdev4", 00:35:38.936 "uuid": "3074d1f2-3859-4a89-8c05-0ee3d5211270", 00:35:38.936 "is_configured": true, 00:35:38.936 "data_offset": 0, 00:35:38.936 "data_size": 65536 00:35:38.936 } 00:35:38.936 ] 00:35:38.936 }' 00:35:38.936 23:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:38.936 23:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:39.534 23:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:39.534 23:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:35:39.534 23:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.534 23:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:39.534 23:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.534 23:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:35:39.534 23:17:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:35:39.534 23:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.534 23:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:39.534 [2024-12-09 23:17:19.946621] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:35:39.534 23:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.534 23:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:35:39.534 23:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:39.534 23:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:39.534 23:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:39.534 23:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:39.534 23:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:39.534 23:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:39.534 23:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:39.534 23:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:39.534 23:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:39.534 23:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:39.534 23:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:39.534 23:17:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.534 23:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:39.534 23:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.534 23:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:39.534 "name": "Existed_Raid", 00:35:39.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:39.534 "strip_size_kb": 64, 00:35:39.534 "state": "configuring", 00:35:39.534 "raid_level": "concat", 00:35:39.534 "superblock": false, 00:35:39.534 "num_base_bdevs": 4, 00:35:39.534 "num_base_bdevs_discovered": 2, 00:35:39.534 "num_base_bdevs_operational": 4, 00:35:39.534 "base_bdevs_list": [ 00:35:39.534 { 00:35:39.534 "name": "BaseBdev1", 00:35:39.534 "uuid": "e53124e2-0090-4f88-8ebf-a4c1521e4526", 00:35:39.534 "is_configured": true, 00:35:39.534 "data_offset": 0, 00:35:39.534 "data_size": 65536 00:35:39.534 }, 00:35:39.534 { 00:35:39.534 "name": null, 00:35:39.534 "uuid": "40958b14-a115-4e34-adcd-aff75696dcec", 00:35:39.534 "is_configured": false, 00:35:39.534 "data_offset": 0, 00:35:39.534 "data_size": 65536 00:35:39.534 }, 00:35:39.534 { 00:35:39.534 "name": null, 00:35:39.534 "uuid": "32df930d-d0a4-4b04-970c-1cb81bf9dfdb", 00:35:39.534 "is_configured": false, 00:35:39.534 "data_offset": 0, 00:35:39.534 "data_size": 65536 00:35:39.534 }, 00:35:39.534 { 00:35:39.534 "name": "BaseBdev4", 00:35:39.534 "uuid": "3074d1f2-3859-4a89-8c05-0ee3d5211270", 00:35:39.534 "is_configured": true, 00:35:39.534 "data_offset": 0, 00:35:39.534 "data_size": 65536 00:35:39.534 } 00:35:39.534 ] 00:35:39.534 }' 00:35:39.534 23:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:39.534 23:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:39.793 23:17:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:39.793 23:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.793 23:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:39.793 23:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:35:39.793 23:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.793 23:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:35:39.793 23:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:35:39.793 23:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.793 23:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:39.793 [2024-12-09 23:17:20.390443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:39.793 23:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.793 23:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:35:39.793 23:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:39.793 23:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:39.793 23:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:39.793 23:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:39.793 23:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:39.793 23:17:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:39.793 23:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:39.793 23:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:39.793 23:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:39.793 23:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:39.793 23:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:39.793 23:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.793 23:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:40.051 23:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.051 23:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:40.051 "name": "Existed_Raid", 00:35:40.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:40.051 "strip_size_kb": 64, 00:35:40.051 "state": "configuring", 00:35:40.051 "raid_level": "concat", 00:35:40.051 "superblock": false, 00:35:40.051 "num_base_bdevs": 4, 00:35:40.051 "num_base_bdevs_discovered": 3, 00:35:40.051 "num_base_bdevs_operational": 4, 00:35:40.051 "base_bdevs_list": [ 00:35:40.051 { 00:35:40.051 "name": "BaseBdev1", 00:35:40.051 "uuid": "e53124e2-0090-4f88-8ebf-a4c1521e4526", 00:35:40.051 "is_configured": true, 00:35:40.051 "data_offset": 0, 00:35:40.051 "data_size": 65536 00:35:40.051 }, 00:35:40.051 { 00:35:40.051 "name": null, 00:35:40.051 "uuid": "40958b14-a115-4e34-adcd-aff75696dcec", 00:35:40.051 "is_configured": false, 00:35:40.051 "data_offset": 0, 00:35:40.051 "data_size": 65536 00:35:40.052 }, 00:35:40.052 { 00:35:40.052 "name": "BaseBdev3", 00:35:40.052 "uuid": 
"32df930d-d0a4-4b04-970c-1cb81bf9dfdb", 00:35:40.052 "is_configured": true, 00:35:40.052 "data_offset": 0, 00:35:40.052 "data_size": 65536 00:35:40.052 }, 00:35:40.052 { 00:35:40.052 "name": "BaseBdev4", 00:35:40.052 "uuid": "3074d1f2-3859-4a89-8c05-0ee3d5211270", 00:35:40.052 "is_configured": true, 00:35:40.052 "data_offset": 0, 00:35:40.052 "data_size": 65536 00:35:40.052 } 00:35:40.052 ] 00:35:40.052 }' 00:35:40.052 23:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:40.052 23:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:40.310 23:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:35:40.310 23:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:40.310 23:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.310 23:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:40.310 23:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.310 23:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:35:40.310 23:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:35:40.310 23:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.310 23:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:40.310 [2024-12-09 23:17:20.866521] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:40.569 23:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.569 23:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:35:40.569 23:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:40.569 23:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:40.569 23:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:40.569 23:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:40.569 23:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:40.569 23:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:40.569 23:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:40.569 23:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:40.569 23:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:40.569 23:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:40.569 23:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.569 23:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:40.569 23:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:40.569 23:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.569 23:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:40.569 "name": "Existed_Raid", 00:35:40.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:40.569 "strip_size_kb": 64, 00:35:40.569 "state": "configuring", 00:35:40.569 "raid_level": "concat", 00:35:40.569 "superblock": false, 00:35:40.569 "num_base_bdevs": 4, 00:35:40.569 
"num_base_bdevs_discovered": 2, 00:35:40.569 "num_base_bdevs_operational": 4, 00:35:40.569 "base_bdevs_list": [ 00:35:40.569 { 00:35:40.569 "name": null, 00:35:40.569 "uuid": "e53124e2-0090-4f88-8ebf-a4c1521e4526", 00:35:40.569 "is_configured": false, 00:35:40.569 "data_offset": 0, 00:35:40.569 "data_size": 65536 00:35:40.569 }, 00:35:40.569 { 00:35:40.569 "name": null, 00:35:40.569 "uuid": "40958b14-a115-4e34-adcd-aff75696dcec", 00:35:40.569 "is_configured": false, 00:35:40.569 "data_offset": 0, 00:35:40.569 "data_size": 65536 00:35:40.569 }, 00:35:40.569 { 00:35:40.569 "name": "BaseBdev3", 00:35:40.569 "uuid": "32df930d-d0a4-4b04-970c-1cb81bf9dfdb", 00:35:40.569 "is_configured": true, 00:35:40.569 "data_offset": 0, 00:35:40.569 "data_size": 65536 00:35:40.569 }, 00:35:40.569 { 00:35:40.569 "name": "BaseBdev4", 00:35:40.569 "uuid": "3074d1f2-3859-4a89-8c05-0ee3d5211270", 00:35:40.569 "is_configured": true, 00:35:40.569 "data_offset": 0, 00:35:40.569 "data_size": 65536 00:35:40.569 } 00:35:40.569 ] 00:35:40.569 }' 00:35:40.569 23:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:40.569 23:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:40.828 23:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:40.828 23:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:35:40.828 23:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.828 23:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:40.828 23:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.828 23:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:35:40.828 23:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:35:40.828 23:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.828 23:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:41.087 [2024-12-09 23:17:21.463576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:41.087 23:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.087 23:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:35:41.087 23:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:41.087 23:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:41.087 23:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:41.087 23:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:41.087 23:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:41.087 23:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:41.087 23:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:41.087 23:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:41.087 23:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:41.087 23:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:41.087 23:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.087 23:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:35:41.087 23:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:41.087 23:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.087 23:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:41.087 "name": "Existed_Raid", 00:35:41.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:41.087 "strip_size_kb": 64, 00:35:41.087 "state": "configuring", 00:35:41.087 "raid_level": "concat", 00:35:41.087 "superblock": false, 00:35:41.087 "num_base_bdevs": 4, 00:35:41.087 "num_base_bdevs_discovered": 3, 00:35:41.087 "num_base_bdevs_operational": 4, 00:35:41.087 "base_bdevs_list": [ 00:35:41.087 { 00:35:41.087 "name": null, 00:35:41.087 "uuid": "e53124e2-0090-4f88-8ebf-a4c1521e4526", 00:35:41.087 "is_configured": false, 00:35:41.087 "data_offset": 0, 00:35:41.087 "data_size": 65536 00:35:41.087 }, 00:35:41.087 { 00:35:41.087 "name": "BaseBdev2", 00:35:41.087 "uuid": "40958b14-a115-4e34-adcd-aff75696dcec", 00:35:41.087 "is_configured": true, 00:35:41.087 "data_offset": 0, 00:35:41.087 "data_size": 65536 00:35:41.087 }, 00:35:41.087 { 00:35:41.087 "name": "BaseBdev3", 00:35:41.087 "uuid": "32df930d-d0a4-4b04-970c-1cb81bf9dfdb", 00:35:41.087 "is_configured": true, 00:35:41.087 "data_offset": 0, 00:35:41.087 "data_size": 65536 00:35:41.087 }, 00:35:41.087 { 00:35:41.087 "name": "BaseBdev4", 00:35:41.087 "uuid": "3074d1f2-3859-4a89-8c05-0ee3d5211270", 00:35:41.087 "is_configured": true, 00:35:41.087 "data_offset": 0, 00:35:41.087 "data_size": 65536 00:35:41.087 } 00:35:41.087 ] 00:35:41.087 }' 00:35:41.087 23:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:41.087 23:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:41.346 23:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:35:41.346 23:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:41.346 23:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.346 23:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:41.346 23:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.346 23:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:35:41.346 23:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:35:41.346 23:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:41.346 23:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.346 23:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:41.346 23:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.346 23:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e53124e2-0090-4f88-8ebf-a4c1521e4526 00:35:41.346 23:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.346 23:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:41.605 [2024-12-09 23:17:21.981697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:35:41.605 [2024-12-09 23:17:21.981760] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:35:41.605 [2024-12-09 23:17:21.981769] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:35:41.605 [2024-12-09 23:17:21.982052] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:35:41.605 [2024-12-09 23:17:21.982186] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:35:41.605 [2024-12-09 23:17:21.982198] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:35:41.605 [2024-12-09 23:17:21.982495] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:41.605 NewBaseBdev 00:35:41.605 23:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.605 23:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:35:41.605 23:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:35:41.605 23:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:41.605 23:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:35:41.605 23:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:41.605 23:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:41.605 23:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:41.605 23:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.605 23:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:41.605 23:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.605 23:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:35:41.605 23:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.606 23:17:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:41.606 [ 00:35:41.606 { 00:35:41.606 "name": "NewBaseBdev", 00:35:41.606 "aliases": [ 00:35:41.606 "e53124e2-0090-4f88-8ebf-a4c1521e4526" 00:35:41.606 ], 00:35:41.606 "product_name": "Malloc disk", 00:35:41.606 "block_size": 512, 00:35:41.606 "num_blocks": 65536, 00:35:41.606 "uuid": "e53124e2-0090-4f88-8ebf-a4c1521e4526", 00:35:41.606 "assigned_rate_limits": { 00:35:41.606 "rw_ios_per_sec": 0, 00:35:41.606 "rw_mbytes_per_sec": 0, 00:35:41.606 "r_mbytes_per_sec": 0, 00:35:41.606 "w_mbytes_per_sec": 0 00:35:41.606 }, 00:35:41.606 "claimed": true, 00:35:41.606 "claim_type": "exclusive_write", 00:35:41.606 "zoned": false, 00:35:41.606 "supported_io_types": { 00:35:41.606 "read": true, 00:35:41.606 "write": true, 00:35:41.606 "unmap": true, 00:35:41.606 "flush": true, 00:35:41.606 "reset": true, 00:35:41.606 "nvme_admin": false, 00:35:41.606 "nvme_io": false, 00:35:41.606 "nvme_io_md": false, 00:35:41.606 "write_zeroes": true, 00:35:41.606 "zcopy": true, 00:35:41.606 "get_zone_info": false, 00:35:41.606 "zone_management": false, 00:35:41.606 "zone_append": false, 00:35:41.606 "compare": false, 00:35:41.606 "compare_and_write": false, 00:35:41.606 "abort": true, 00:35:41.606 "seek_hole": false, 00:35:41.606 "seek_data": false, 00:35:41.606 "copy": true, 00:35:41.606 "nvme_iov_md": false 00:35:41.606 }, 00:35:41.606 "memory_domains": [ 00:35:41.606 { 00:35:41.606 "dma_device_id": "system", 00:35:41.606 "dma_device_type": 1 00:35:41.606 }, 00:35:41.606 { 00:35:41.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:41.606 "dma_device_type": 2 00:35:41.606 } 00:35:41.606 ], 00:35:41.606 "driver_specific": {} 00:35:41.606 } 00:35:41.606 ] 00:35:41.606 23:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.606 23:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:35:41.606 23:17:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:35:41.606 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:41.606 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:41.606 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:41.606 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:41.606 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:41.606 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:41.606 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:41.606 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:41.606 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:41.606 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:41.606 23:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.606 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:41.606 23:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:41.606 23:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.606 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:41.606 "name": "Existed_Raid", 00:35:41.606 "uuid": "a833f808-11eb-4312-ae68-5a64fdcbbb68", 00:35:41.606 "strip_size_kb": 64, 00:35:41.606 "state": "online", 00:35:41.606 "raid_level": 
"concat", 00:35:41.606 "superblock": false, 00:35:41.606 "num_base_bdevs": 4, 00:35:41.606 "num_base_bdevs_discovered": 4, 00:35:41.606 "num_base_bdevs_operational": 4, 00:35:41.606 "base_bdevs_list": [ 00:35:41.606 { 00:35:41.606 "name": "NewBaseBdev", 00:35:41.606 "uuid": "e53124e2-0090-4f88-8ebf-a4c1521e4526", 00:35:41.606 "is_configured": true, 00:35:41.606 "data_offset": 0, 00:35:41.606 "data_size": 65536 00:35:41.606 }, 00:35:41.606 { 00:35:41.606 "name": "BaseBdev2", 00:35:41.606 "uuid": "40958b14-a115-4e34-adcd-aff75696dcec", 00:35:41.606 "is_configured": true, 00:35:41.606 "data_offset": 0, 00:35:41.606 "data_size": 65536 00:35:41.606 }, 00:35:41.606 { 00:35:41.606 "name": "BaseBdev3", 00:35:41.606 "uuid": "32df930d-d0a4-4b04-970c-1cb81bf9dfdb", 00:35:41.606 "is_configured": true, 00:35:41.606 "data_offset": 0, 00:35:41.606 "data_size": 65536 00:35:41.606 }, 00:35:41.606 { 00:35:41.606 "name": "BaseBdev4", 00:35:41.606 "uuid": "3074d1f2-3859-4a89-8c05-0ee3d5211270", 00:35:41.606 "is_configured": true, 00:35:41.606 "data_offset": 0, 00:35:41.606 "data_size": 65536 00:35:41.606 } 00:35:41.606 ] 00:35:41.606 }' 00:35:41.606 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:41.606 23:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:41.864 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:35:41.864 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:35:41.865 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:35:41.865 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:35:41.865 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:35:41.865 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local 
cmp_raid_bdev cmp_base_bdev 00:35:41.865 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:35:41.865 23:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.865 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:35:41.865 23:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:41.865 [2024-12-09 23:17:22.469519] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:42.124 23:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.124 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:42.124 "name": "Existed_Raid", 00:35:42.124 "aliases": [ 00:35:42.124 "a833f808-11eb-4312-ae68-5a64fdcbbb68" 00:35:42.124 ], 00:35:42.124 "product_name": "Raid Volume", 00:35:42.124 "block_size": 512, 00:35:42.124 "num_blocks": 262144, 00:35:42.124 "uuid": "a833f808-11eb-4312-ae68-5a64fdcbbb68", 00:35:42.124 "assigned_rate_limits": { 00:35:42.124 "rw_ios_per_sec": 0, 00:35:42.124 "rw_mbytes_per_sec": 0, 00:35:42.124 "r_mbytes_per_sec": 0, 00:35:42.124 "w_mbytes_per_sec": 0 00:35:42.124 }, 00:35:42.124 "claimed": false, 00:35:42.124 "zoned": false, 00:35:42.124 "supported_io_types": { 00:35:42.124 "read": true, 00:35:42.124 "write": true, 00:35:42.124 "unmap": true, 00:35:42.124 "flush": true, 00:35:42.124 "reset": true, 00:35:42.124 "nvme_admin": false, 00:35:42.124 "nvme_io": false, 00:35:42.124 "nvme_io_md": false, 00:35:42.124 "write_zeroes": true, 00:35:42.124 "zcopy": false, 00:35:42.124 "get_zone_info": false, 00:35:42.124 "zone_management": false, 00:35:42.124 "zone_append": false, 00:35:42.124 "compare": false, 00:35:42.124 "compare_and_write": false, 00:35:42.124 "abort": false, 00:35:42.124 "seek_hole": false, 00:35:42.124 "seek_data": false, 00:35:42.124 "copy": false, 
00:35:42.124 "nvme_iov_md": false 00:35:42.124 }, 00:35:42.124 "memory_domains": [ 00:35:42.124 { 00:35:42.124 "dma_device_id": "system", 00:35:42.124 "dma_device_type": 1 00:35:42.124 }, 00:35:42.124 { 00:35:42.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:42.124 "dma_device_type": 2 00:35:42.124 }, 00:35:42.124 { 00:35:42.124 "dma_device_id": "system", 00:35:42.124 "dma_device_type": 1 00:35:42.124 }, 00:35:42.124 { 00:35:42.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:42.124 "dma_device_type": 2 00:35:42.124 }, 00:35:42.124 { 00:35:42.124 "dma_device_id": "system", 00:35:42.124 "dma_device_type": 1 00:35:42.124 }, 00:35:42.124 { 00:35:42.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:42.124 "dma_device_type": 2 00:35:42.124 }, 00:35:42.124 { 00:35:42.124 "dma_device_id": "system", 00:35:42.124 "dma_device_type": 1 00:35:42.124 }, 00:35:42.124 { 00:35:42.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:42.124 "dma_device_type": 2 00:35:42.124 } 00:35:42.124 ], 00:35:42.124 "driver_specific": { 00:35:42.124 "raid": { 00:35:42.124 "uuid": "a833f808-11eb-4312-ae68-5a64fdcbbb68", 00:35:42.124 "strip_size_kb": 64, 00:35:42.124 "state": "online", 00:35:42.124 "raid_level": "concat", 00:35:42.124 "superblock": false, 00:35:42.124 "num_base_bdevs": 4, 00:35:42.124 "num_base_bdevs_discovered": 4, 00:35:42.124 "num_base_bdevs_operational": 4, 00:35:42.124 "base_bdevs_list": [ 00:35:42.124 { 00:35:42.124 "name": "NewBaseBdev", 00:35:42.124 "uuid": "e53124e2-0090-4f88-8ebf-a4c1521e4526", 00:35:42.124 "is_configured": true, 00:35:42.124 "data_offset": 0, 00:35:42.124 "data_size": 65536 00:35:42.124 }, 00:35:42.124 { 00:35:42.124 "name": "BaseBdev2", 00:35:42.124 "uuid": "40958b14-a115-4e34-adcd-aff75696dcec", 00:35:42.124 "is_configured": true, 00:35:42.124 "data_offset": 0, 00:35:42.124 "data_size": 65536 00:35:42.124 }, 00:35:42.124 { 00:35:42.124 "name": "BaseBdev3", 00:35:42.124 "uuid": "32df930d-d0a4-4b04-970c-1cb81bf9dfdb", 00:35:42.124 
"is_configured": true, 00:35:42.124 "data_offset": 0, 00:35:42.124 "data_size": 65536 00:35:42.124 }, 00:35:42.124 { 00:35:42.124 "name": "BaseBdev4", 00:35:42.124 "uuid": "3074d1f2-3859-4a89-8c05-0ee3d5211270", 00:35:42.124 "is_configured": true, 00:35:42.124 "data_offset": 0, 00:35:42.124 "data_size": 65536 00:35:42.124 } 00:35:42.124 ] 00:35:42.124 } 00:35:42.124 } 00:35:42.124 }' 00:35:42.124 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:42.124 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:35:42.124 BaseBdev2 00:35:42.124 BaseBdev3 00:35:42.124 BaseBdev4' 00:35:42.124 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:42.124 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:35:42.124 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:42.124 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:42.124 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:35:42.124 23:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.124 23:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:42.124 23:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.124 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:42.124 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:42.124 23:17:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:42.124 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:42.124 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:35:42.124 23:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.124 23:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:42.124 23:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.124 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:42.124 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:42.124 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:42.124 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:42.124 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:35:42.124 23:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.124 23:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:42.124 23:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.124 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:42.124 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:42.124 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:42.384 23:17:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:42.384 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:35:42.384 23:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.384 23:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:42.384 23:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.384 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:42.384 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:42.384 23:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:35:42.384 23:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.384 23:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:42.384 [2024-12-09 23:17:22.816645] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:42.384 [2024-12-09 23:17:22.816682] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:42.384 [2024-12-09 23:17:22.816773] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:42.384 [2024-12-09 23:17:22.816841] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:42.384 [2024-12-09 23:17:22.816853] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:35:42.384 23:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.384 23:17:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 71144 00:35:42.384 23:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71144 ']' 00:35:42.384 23:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71144 00:35:42.384 23:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:35:42.384 23:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:42.384 23:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71144 00:35:42.384 killing process with pid 71144 00:35:42.384 23:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:42.384 23:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:42.384 23:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71144' 00:35:42.384 23:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71144 00:35:42.384 [2024-12-09 23:17:22.859793] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:42.384 23:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71144 00:35:42.950 [2024-12-09 23:17:23.285705] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:43.892 23:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:35:43.892 00:35:43.892 real 0m11.637s 00:35:43.892 user 0m18.457s 00:35:43.892 sys 0m2.271s 00:35:43.892 ************************************ 00:35:43.892 END TEST raid_state_function_test 00:35:43.892 ************************************ 00:35:43.892 23:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:43.892 23:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:35:43.892 23:17:24 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:35:43.892 23:17:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:35:43.892 23:17:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:43.892 23:17:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:44.151 ************************************ 00:35:44.151 START TEST raid_state_function_test_sb 00:35:44.151 ************************************ 00:35:44.151 23:17:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:35:44.151 23:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:35:44.151 23:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:35:44.151 23:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:35:44.151 23:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:35:44.151 23:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:35:44.151 23:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:44.151 23:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:35:44.151 23:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:44.151 23:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:44.151 23:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:35:44.151 23:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:44.151 23:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:44.151 
23:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:35:44.151 23:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:44.151 23:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:44.151 23:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:35:44.151 23:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:44.151 23:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:44.151 23:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:35:44.151 23:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:35:44.151 23:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:35:44.151 23:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:35:44.151 23:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:35:44.151 23:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:35:44.151 23:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:35:44.151 23:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:35:44.151 23:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:35:44.151 23:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:35:44.151 23:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:35:44.151 23:17:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@229 -- # raid_pid=71819 00:35:44.151 23:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:35:44.151 Process raid pid: 71819 00:35:44.151 23:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71819' 00:35:44.151 23:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71819 00:35:44.151 23:17:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 71819 ']' 00:35:44.151 23:17:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:44.151 23:17:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:44.151 23:17:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:44.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:44.151 23:17:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:44.151 23:17:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:44.151 [2024-12-09 23:17:24.645256] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:35:44.151 [2024-12-09 23:17:24.645383] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:44.409 [2024-12-09 23:17:24.816804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:44.409 [2024-12-09 23:17:24.942011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:44.667 [2024-12-09 23:17:25.171734] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:44.667 [2024-12-09 23:17:25.171783] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:44.925 23:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:44.925 23:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:35:44.925 23:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:35:44.925 23:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.925 23:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:44.925 [2024-12-09 23:17:25.543967] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:44.925 [2024-12-09 23:17:25.544034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:44.925 [2024-12-09 23:17:25.544048] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:44.925 [2024-12-09 23:17:25.544062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:44.925 [2024-12-09 23:17:25.544077] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:35:44.925 [2024-12-09 23:17:25.544090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:44.925 [2024-12-09 23:17:25.544098] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:35:44.925 [2024-12-09 23:17:25.544111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:35:44.925 23:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.925 23:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:35:44.925 23:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:44.925 23:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:44.925 23:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:44.925 23:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:44.925 23:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:44.925 23:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:44.925 23:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:44.925 23:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:44.925 23:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:44.925 23:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:44.925 23:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.925 23:17:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:44.925 23:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:45.183 23:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.183 23:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:45.183 "name": "Existed_Raid", 00:35:45.183 "uuid": "fe718937-7d30-4159-adf7-5616d5bf15c0", 00:35:45.183 "strip_size_kb": 64, 00:35:45.183 "state": "configuring", 00:35:45.183 "raid_level": "concat", 00:35:45.183 "superblock": true, 00:35:45.183 "num_base_bdevs": 4, 00:35:45.183 "num_base_bdevs_discovered": 0, 00:35:45.183 "num_base_bdevs_operational": 4, 00:35:45.183 "base_bdevs_list": [ 00:35:45.183 { 00:35:45.183 "name": "BaseBdev1", 00:35:45.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:45.183 "is_configured": false, 00:35:45.183 "data_offset": 0, 00:35:45.183 "data_size": 0 00:35:45.183 }, 00:35:45.183 { 00:35:45.183 "name": "BaseBdev2", 00:35:45.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:45.183 "is_configured": false, 00:35:45.183 "data_offset": 0, 00:35:45.183 "data_size": 0 00:35:45.183 }, 00:35:45.183 { 00:35:45.183 "name": "BaseBdev3", 00:35:45.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:45.183 "is_configured": false, 00:35:45.183 "data_offset": 0, 00:35:45.183 "data_size": 0 00:35:45.183 }, 00:35:45.183 { 00:35:45.183 "name": "BaseBdev4", 00:35:45.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:45.183 "is_configured": false, 00:35:45.183 "data_offset": 0, 00:35:45.183 "data_size": 0 00:35:45.183 } 00:35:45.183 ] 00:35:45.183 }' 00:35:45.183 23:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:45.183 23:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:45.442 23:17:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:35:45.442 23:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.442 23:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:45.442 [2024-12-09 23:17:26.011270] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:45.442 [2024-12-09 23:17:26.011476] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:35:45.442 23:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.442 23:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:35:45.442 23:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.442 23:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:45.442 [2024-12-09 23:17:26.019268] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:45.442 [2024-12-09 23:17:26.019461] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:45.442 [2024-12-09 23:17:26.019486] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:45.442 [2024-12-09 23:17:26.019505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:45.442 [2024-12-09 23:17:26.019519] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:35:45.442 [2024-12-09 23:17:26.019535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:45.442 [2024-12-09 23:17:26.019543] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:35:45.442 [2024-12-09 23:17:26.019555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:35:45.442 23:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.442 23:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:35:45.442 23:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.442 23:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:45.442 [2024-12-09 23:17:26.067053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:45.442 BaseBdev1 00:35:45.442 23:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.442 23:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:35:45.442 23:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:35:45.442 23:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:45.442 23:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:45.442 23:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:45.442 23:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:45.443 23:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:45.443 23:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.443 23:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:45.700 23:17:26 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.700 23:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:45.700 23:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.700 23:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:45.700 [ 00:35:45.700 { 00:35:45.700 "name": "BaseBdev1", 00:35:45.700 "aliases": [ 00:35:45.700 "0879208a-4341-44bf-aad2-bc825a336797" 00:35:45.700 ], 00:35:45.700 "product_name": "Malloc disk", 00:35:45.700 "block_size": 512, 00:35:45.700 "num_blocks": 65536, 00:35:45.700 "uuid": "0879208a-4341-44bf-aad2-bc825a336797", 00:35:45.700 "assigned_rate_limits": { 00:35:45.700 "rw_ios_per_sec": 0, 00:35:45.700 "rw_mbytes_per_sec": 0, 00:35:45.700 "r_mbytes_per_sec": 0, 00:35:45.700 "w_mbytes_per_sec": 0 00:35:45.700 }, 00:35:45.700 "claimed": true, 00:35:45.700 "claim_type": "exclusive_write", 00:35:45.700 "zoned": false, 00:35:45.700 "supported_io_types": { 00:35:45.700 "read": true, 00:35:45.700 "write": true, 00:35:45.700 "unmap": true, 00:35:45.700 "flush": true, 00:35:45.700 "reset": true, 00:35:45.700 "nvme_admin": false, 00:35:45.700 "nvme_io": false, 00:35:45.700 "nvme_io_md": false, 00:35:45.700 "write_zeroes": true, 00:35:45.701 "zcopy": true, 00:35:45.701 "get_zone_info": false, 00:35:45.701 "zone_management": false, 00:35:45.701 "zone_append": false, 00:35:45.701 "compare": false, 00:35:45.701 "compare_and_write": false, 00:35:45.701 "abort": true, 00:35:45.701 "seek_hole": false, 00:35:45.701 "seek_data": false, 00:35:45.701 "copy": true, 00:35:45.701 "nvme_iov_md": false 00:35:45.701 }, 00:35:45.701 "memory_domains": [ 00:35:45.701 { 00:35:45.701 "dma_device_id": "system", 00:35:45.701 "dma_device_type": 1 00:35:45.701 }, 00:35:45.701 { 00:35:45.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:45.701 "dma_device_type": 2 00:35:45.701 } 
00:35:45.701 ], 00:35:45.701 "driver_specific": {} 00:35:45.701 } 00:35:45.701 ] 00:35:45.701 23:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.701 23:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:45.701 23:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:35:45.701 23:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:45.701 23:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:45.701 23:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:45.701 23:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:45.701 23:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:45.701 23:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:45.701 23:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:45.701 23:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:45.701 23:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:45.701 23:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:45.701 23:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.701 23:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:45.701 23:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:45.701 23:17:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.701 23:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:45.701 "name": "Existed_Raid", 00:35:45.701 "uuid": "775b2086-2749-4110-bdd4-d678d576b459", 00:35:45.701 "strip_size_kb": 64, 00:35:45.701 "state": "configuring", 00:35:45.701 "raid_level": "concat", 00:35:45.701 "superblock": true, 00:35:45.701 "num_base_bdevs": 4, 00:35:45.701 "num_base_bdevs_discovered": 1, 00:35:45.701 "num_base_bdevs_operational": 4, 00:35:45.701 "base_bdevs_list": [ 00:35:45.701 { 00:35:45.701 "name": "BaseBdev1", 00:35:45.701 "uuid": "0879208a-4341-44bf-aad2-bc825a336797", 00:35:45.701 "is_configured": true, 00:35:45.701 "data_offset": 2048, 00:35:45.701 "data_size": 63488 00:35:45.701 }, 00:35:45.701 { 00:35:45.701 "name": "BaseBdev2", 00:35:45.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:45.701 "is_configured": false, 00:35:45.701 "data_offset": 0, 00:35:45.701 "data_size": 0 00:35:45.701 }, 00:35:45.701 { 00:35:45.701 "name": "BaseBdev3", 00:35:45.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:45.701 "is_configured": false, 00:35:45.701 "data_offset": 0, 00:35:45.701 "data_size": 0 00:35:45.701 }, 00:35:45.701 { 00:35:45.701 "name": "BaseBdev4", 00:35:45.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:45.701 "is_configured": false, 00:35:45.701 "data_offset": 0, 00:35:45.701 "data_size": 0 00:35:45.701 } 00:35:45.701 ] 00:35:45.701 }' 00:35:45.701 23:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:45.701 23:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:45.959 23:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:35:45.959 23:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.959 23:17:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:45.959 [2024-12-09 23:17:26.582441] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:45.959 [2024-12-09 23:17:26.582517] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:35:45.959 23:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.959 23:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:35:45.959 23:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.959 23:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:46.216 [2024-12-09 23:17:26.594539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:46.216 [2024-12-09 23:17:26.596883] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:46.216 [2024-12-09 23:17:26.596934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:46.216 [2024-12-09 23:17:26.596946] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:35:46.216 [2024-12-09 23:17:26.596963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:46.216 [2024-12-09 23:17:26.596971] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:35:46.216 [2024-12-09 23:17:26.596984] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:35:46.216 23:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.216 23:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:35:46.216 23:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:35:46.216 23:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:35:46.216 23:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:46.216 23:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:46.216 23:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:46.216 23:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:46.216 23:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:46.216 23:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:46.216 23:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:46.216 23:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:46.216 23:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:46.216 23:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:46.216 23:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.216 23:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:46.216 23:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:46.216 23:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.216 23:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:35:46.216 "name": "Existed_Raid", 00:35:46.216 "uuid": "797ca40a-ccb6-4bc4-a257-cd6d3dfb5cbb", 00:35:46.216 "strip_size_kb": 64, 00:35:46.216 "state": "configuring", 00:35:46.216 "raid_level": "concat", 00:35:46.216 "superblock": true, 00:35:46.216 "num_base_bdevs": 4, 00:35:46.216 "num_base_bdevs_discovered": 1, 00:35:46.216 "num_base_bdevs_operational": 4, 00:35:46.216 "base_bdevs_list": [ 00:35:46.216 { 00:35:46.216 "name": "BaseBdev1", 00:35:46.216 "uuid": "0879208a-4341-44bf-aad2-bc825a336797", 00:35:46.216 "is_configured": true, 00:35:46.216 "data_offset": 2048, 00:35:46.216 "data_size": 63488 00:35:46.216 }, 00:35:46.216 { 00:35:46.216 "name": "BaseBdev2", 00:35:46.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:46.216 "is_configured": false, 00:35:46.216 "data_offset": 0, 00:35:46.216 "data_size": 0 00:35:46.216 }, 00:35:46.216 { 00:35:46.216 "name": "BaseBdev3", 00:35:46.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:46.216 "is_configured": false, 00:35:46.216 "data_offset": 0, 00:35:46.216 "data_size": 0 00:35:46.216 }, 00:35:46.216 { 00:35:46.216 "name": "BaseBdev4", 00:35:46.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:46.216 "is_configured": false, 00:35:46.216 "data_offset": 0, 00:35:46.216 "data_size": 0 00:35:46.216 } 00:35:46.216 ] 00:35:46.216 }' 00:35:46.216 23:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:46.216 23:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:46.474 23:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:35:46.474 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.474 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:46.474 [2024-12-09 23:17:27.080279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:35:46.474 BaseBdev2 00:35:46.474 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.474 23:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:35:46.474 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:35:46.474 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:46.474 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:46.474 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:46.474 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:46.474 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:46.474 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.474 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:46.474 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.474 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:35:46.474 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.474 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:46.474 [ 00:35:46.474 { 00:35:46.474 "name": "BaseBdev2", 00:35:46.474 "aliases": [ 00:35:46.474 "ce6504b1-44b4-48f7-bb84-cd8246ee6225" 00:35:46.474 ], 00:35:46.474 "product_name": "Malloc disk", 00:35:46.474 "block_size": 512, 00:35:46.474 "num_blocks": 65536, 00:35:46.733 "uuid": "ce6504b1-44b4-48f7-bb84-cd8246ee6225", 
00:35:46.733 "assigned_rate_limits": { 00:35:46.733 "rw_ios_per_sec": 0, 00:35:46.733 "rw_mbytes_per_sec": 0, 00:35:46.733 "r_mbytes_per_sec": 0, 00:35:46.733 "w_mbytes_per_sec": 0 00:35:46.733 }, 00:35:46.733 "claimed": true, 00:35:46.733 "claim_type": "exclusive_write", 00:35:46.733 "zoned": false, 00:35:46.733 "supported_io_types": { 00:35:46.733 "read": true, 00:35:46.733 "write": true, 00:35:46.733 "unmap": true, 00:35:46.733 "flush": true, 00:35:46.733 "reset": true, 00:35:46.733 "nvme_admin": false, 00:35:46.733 "nvme_io": false, 00:35:46.733 "nvme_io_md": false, 00:35:46.733 "write_zeroes": true, 00:35:46.733 "zcopy": true, 00:35:46.733 "get_zone_info": false, 00:35:46.733 "zone_management": false, 00:35:46.733 "zone_append": false, 00:35:46.733 "compare": false, 00:35:46.733 "compare_and_write": false, 00:35:46.733 "abort": true, 00:35:46.733 "seek_hole": false, 00:35:46.733 "seek_data": false, 00:35:46.733 "copy": true, 00:35:46.733 "nvme_iov_md": false 00:35:46.733 }, 00:35:46.733 "memory_domains": [ 00:35:46.733 { 00:35:46.733 "dma_device_id": "system", 00:35:46.733 "dma_device_type": 1 00:35:46.733 }, 00:35:46.733 { 00:35:46.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:46.733 "dma_device_type": 2 00:35:46.733 } 00:35:46.733 ], 00:35:46.733 "driver_specific": {} 00:35:46.733 } 00:35:46.733 ] 00:35:46.733 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.733 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:46.733 23:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:35:46.733 23:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:35:46.733 23:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:35:46.733 23:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:35:46.733 23:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:46.733 23:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:46.733 23:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:46.733 23:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:46.733 23:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:46.733 23:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:46.733 23:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:46.733 23:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:46.733 23:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:46.733 23:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:46.733 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.733 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:46.733 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.733 23:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:46.733 "name": "Existed_Raid", 00:35:46.733 "uuid": "797ca40a-ccb6-4bc4-a257-cd6d3dfb5cbb", 00:35:46.733 "strip_size_kb": 64, 00:35:46.733 "state": "configuring", 00:35:46.733 "raid_level": "concat", 00:35:46.733 "superblock": true, 00:35:46.733 "num_base_bdevs": 4, 00:35:46.733 "num_base_bdevs_discovered": 2, 00:35:46.733 
"num_base_bdevs_operational": 4, 00:35:46.733 "base_bdevs_list": [ 00:35:46.733 { 00:35:46.733 "name": "BaseBdev1", 00:35:46.733 "uuid": "0879208a-4341-44bf-aad2-bc825a336797", 00:35:46.733 "is_configured": true, 00:35:46.733 "data_offset": 2048, 00:35:46.733 "data_size": 63488 00:35:46.733 }, 00:35:46.733 { 00:35:46.733 "name": "BaseBdev2", 00:35:46.733 "uuid": "ce6504b1-44b4-48f7-bb84-cd8246ee6225", 00:35:46.733 "is_configured": true, 00:35:46.733 "data_offset": 2048, 00:35:46.733 "data_size": 63488 00:35:46.733 }, 00:35:46.733 { 00:35:46.733 "name": "BaseBdev3", 00:35:46.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:46.733 "is_configured": false, 00:35:46.733 "data_offset": 0, 00:35:46.733 "data_size": 0 00:35:46.733 }, 00:35:46.733 { 00:35:46.733 "name": "BaseBdev4", 00:35:46.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:46.733 "is_configured": false, 00:35:46.733 "data_offset": 0, 00:35:46.733 "data_size": 0 00:35:46.733 } 00:35:46.733 ] 00:35:46.733 }' 00:35:46.733 23:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:46.733 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:46.993 23:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:35:46.993 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.993 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:46.993 [2024-12-09 23:17:27.627024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:46.993 BaseBdev3 00:35:47.252 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.252 23:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:35:47.252 23:17:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:35:47.252 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:47.252 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:47.252 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:47.252 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:47.252 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:47.252 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.253 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:47.253 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.253 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:35:47.253 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.253 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:47.253 [ 00:35:47.253 { 00:35:47.253 "name": "BaseBdev3", 00:35:47.253 "aliases": [ 00:35:47.253 "23b57e00-3da6-4033-a559-8cdf4ca76dc7" 00:35:47.253 ], 00:35:47.253 "product_name": "Malloc disk", 00:35:47.253 "block_size": 512, 00:35:47.253 "num_blocks": 65536, 00:35:47.253 "uuid": "23b57e00-3da6-4033-a559-8cdf4ca76dc7", 00:35:47.253 "assigned_rate_limits": { 00:35:47.253 "rw_ios_per_sec": 0, 00:35:47.253 "rw_mbytes_per_sec": 0, 00:35:47.253 "r_mbytes_per_sec": 0, 00:35:47.253 "w_mbytes_per_sec": 0 00:35:47.253 }, 00:35:47.253 "claimed": true, 00:35:47.253 "claim_type": "exclusive_write", 00:35:47.253 "zoned": false, 00:35:47.253 "supported_io_types": { 
00:35:47.253 "read": true, 00:35:47.253 "write": true, 00:35:47.253 "unmap": true, 00:35:47.253 "flush": true, 00:35:47.253 "reset": true, 00:35:47.253 "nvme_admin": false, 00:35:47.253 "nvme_io": false, 00:35:47.253 "nvme_io_md": false, 00:35:47.253 "write_zeroes": true, 00:35:47.253 "zcopy": true, 00:35:47.253 "get_zone_info": false, 00:35:47.253 "zone_management": false, 00:35:47.253 "zone_append": false, 00:35:47.253 "compare": false, 00:35:47.253 "compare_and_write": false, 00:35:47.253 "abort": true, 00:35:47.253 "seek_hole": false, 00:35:47.253 "seek_data": false, 00:35:47.253 "copy": true, 00:35:47.253 "nvme_iov_md": false 00:35:47.253 }, 00:35:47.253 "memory_domains": [ 00:35:47.253 { 00:35:47.253 "dma_device_id": "system", 00:35:47.253 "dma_device_type": 1 00:35:47.253 }, 00:35:47.253 { 00:35:47.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:47.253 "dma_device_type": 2 00:35:47.253 } 00:35:47.253 ], 00:35:47.253 "driver_specific": {} 00:35:47.253 } 00:35:47.253 ] 00:35:47.253 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.253 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:47.253 23:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:35:47.253 23:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:35:47.253 23:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:35:47.253 23:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:47.253 23:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:47.253 23:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:47.253 23:17:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:47.253 23:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:47.253 23:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:47.253 23:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:47.253 23:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:47.253 23:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:47.253 23:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:47.253 23:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:47.253 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.253 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:47.253 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.253 23:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:47.253 "name": "Existed_Raid", 00:35:47.253 "uuid": "797ca40a-ccb6-4bc4-a257-cd6d3dfb5cbb", 00:35:47.253 "strip_size_kb": 64, 00:35:47.253 "state": "configuring", 00:35:47.253 "raid_level": "concat", 00:35:47.253 "superblock": true, 00:35:47.253 "num_base_bdevs": 4, 00:35:47.253 "num_base_bdevs_discovered": 3, 00:35:47.253 "num_base_bdevs_operational": 4, 00:35:47.253 "base_bdevs_list": [ 00:35:47.253 { 00:35:47.253 "name": "BaseBdev1", 00:35:47.253 "uuid": "0879208a-4341-44bf-aad2-bc825a336797", 00:35:47.253 "is_configured": true, 00:35:47.253 "data_offset": 2048, 00:35:47.253 "data_size": 63488 00:35:47.253 }, 00:35:47.253 { 00:35:47.253 "name": "BaseBdev2", 00:35:47.253 
"uuid": "ce6504b1-44b4-48f7-bb84-cd8246ee6225", 00:35:47.253 "is_configured": true, 00:35:47.253 "data_offset": 2048, 00:35:47.253 "data_size": 63488 00:35:47.253 }, 00:35:47.253 { 00:35:47.253 "name": "BaseBdev3", 00:35:47.253 "uuid": "23b57e00-3da6-4033-a559-8cdf4ca76dc7", 00:35:47.253 "is_configured": true, 00:35:47.253 "data_offset": 2048, 00:35:47.253 "data_size": 63488 00:35:47.253 }, 00:35:47.253 { 00:35:47.253 "name": "BaseBdev4", 00:35:47.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:47.253 "is_configured": false, 00:35:47.253 "data_offset": 0, 00:35:47.253 "data_size": 0 00:35:47.253 } 00:35:47.253 ] 00:35:47.253 }' 00:35:47.253 23:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:47.253 23:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:47.512 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:35:47.512 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.513 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:47.513 [2024-12-09 23:17:28.074400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:35:47.513 [2024-12-09 23:17:28.074712] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:35:47.513 [2024-12-09 23:17:28.074730] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:35:47.513 [2024-12-09 23:17:28.075071] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:35:47.513 [2024-12-09 23:17:28.075273] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:35:47.513 [2024-12-09 23:17:28.075300] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:35:47.513 BaseBdev4 00:35:47.513 [2024-12-09 23:17:28.075513] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:47.513 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.513 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:35:47.513 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:35:47.513 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:47.513 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:47.513 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:47.513 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:47.513 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:47.513 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.513 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:47.513 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.513 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:35:47.513 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.513 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:47.513 [ 00:35:47.513 { 00:35:47.513 "name": "BaseBdev4", 00:35:47.513 "aliases": [ 00:35:47.513 "75d5eeaa-46a2-4254-a004-8d95def1a96a" 00:35:47.513 ], 00:35:47.513 "product_name": "Malloc disk", 00:35:47.513 "block_size": 512, 
00:35:47.513 "num_blocks": 65536, 00:35:47.513 "uuid": "75d5eeaa-46a2-4254-a004-8d95def1a96a", 00:35:47.513 "assigned_rate_limits": { 00:35:47.513 "rw_ios_per_sec": 0, 00:35:47.513 "rw_mbytes_per_sec": 0, 00:35:47.513 "r_mbytes_per_sec": 0, 00:35:47.513 "w_mbytes_per_sec": 0 00:35:47.513 }, 00:35:47.513 "claimed": true, 00:35:47.513 "claim_type": "exclusive_write", 00:35:47.513 "zoned": false, 00:35:47.513 "supported_io_types": { 00:35:47.513 "read": true, 00:35:47.513 "write": true, 00:35:47.513 "unmap": true, 00:35:47.513 "flush": true, 00:35:47.513 "reset": true, 00:35:47.513 "nvme_admin": false, 00:35:47.513 "nvme_io": false, 00:35:47.513 "nvme_io_md": false, 00:35:47.513 "write_zeroes": true, 00:35:47.513 "zcopy": true, 00:35:47.513 "get_zone_info": false, 00:35:47.513 "zone_management": false, 00:35:47.513 "zone_append": false, 00:35:47.513 "compare": false, 00:35:47.513 "compare_and_write": false, 00:35:47.513 "abort": true, 00:35:47.513 "seek_hole": false, 00:35:47.513 "seek_data": false, 00:35:47.513 "copy": true, 00:35:47.513 "nvme_iov_md": false 00:35:47.513 }, 00:35:47.513 "memory_domains": [ 00:35:47.513 { 00:35:47.513 "dma_device_id": "system", 00:35:47.513 "dma_device_type": 1 00:35:47.513 }, 00:35:47.513 { 00:35:47.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:47.513 "dma_device_type": 2 00:35:47.513 } 00:35:47.513 ], 00:35:47.513 "driver_specific": {} 00:35:47.513 } 00:35:47.513 ] 00:35:47.513 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.513 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:47.513 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:35:47.513 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:35:47.513 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:35:47.513 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:47.513 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:47.513 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:47.513 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:47.513 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:47.513 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:47.513 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:47.513 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:47.513 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:47.513 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:47.513 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:47.513 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.513 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:47.773 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.773 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:47.773 "name": "Existed_Raid", 00:35:47.773 "uuid": "797ca40a-ccb6-4bc4-a257-cd6d3dfb5cbb", 00:35:47.773 "strip_size_kb": 64, 00:35:47.773 "state": "online", 00:35:47.773 "raid_level": "concat", 00:35:47.773 "superblock": true, 00:35:47.773 "num_base_bdevs": 
4, 00:35:47.773 "num_base_bdevs_discovered": 4, 00:35:47.773 "num_base_bdevs_operational": 4, 00:35:47.773 "base_bdevs_list": [ 00:35:47.773 { 00:35:47.773 "name": "BaseBdev1", 00:35:47.773 "uuid": "0879208a-4341-44bf-aad2-bc825a336797", 00:35:47.773 "is_configured": true, 00:35:47.773 "data_offset": 2048, 00:35:47.773 "data_size": 63488 00:35:47.773 }, 00:35:47.773 { 00:35:47.773 "name": "BaseBdev2", 00:35:47.773 "uuid": "ce6504b1-44b4-48f7-bb84-cd8246ee6225", 00:35:47.773 "is_configured": true, 00:35:47.773 "data_offset": 2048, 00:35:47.773 "data_size": 63488 00:35:47.773 }, 00:35:47.773 { 00:35:47.773 "name": "BaseBdev3", 00:35:47.773 "uuid": "23b57e00-3da6-4033-a559-8cdf4ca76dc7", 00:35:47.773 "is_configured": true, 00:35:47.773 "data_offset": 2048, 00:35:47.773 "data_size": 63488 00:35:47.773 }, 00:35:47.773 { 00:35:47.773 "name": "BaseBdev4", 00:35:47.773 "uuid": "75d5eeaa-46a2-4254-a004-8d95def1a96a", 00:35:47.773 "is_configured": true, 00:35:47.773 "data_offset": 2048, 00:35:47.773 "data_size": 63488 00:35:47.773 } 00:35:47.773 ] 00:35:47.773 }' 00:35:47.773 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:47.773 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:48.033 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:35:48.033 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:35:48.033 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:35:48.033 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:35:48.033 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:35:48.033 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:35:48.033 
23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:35:48.033 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:35:48.033 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.033 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:48.033 [2024-12-09 23:17:28.562580] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:48.033 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.033 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:48.033 "name": "Existed_Raid", 00:35:48.033 "aliases": [ 00:35:48.033 "797ca40a-ccb6-4bc4-a257-cd6d3dfb5cbb" 00:35:48.033 ], 00:35:48.033 "product_name": "Raid Volume", 00:35:48.033 "block_size": 512, 00:35:48.033 "num_blocks": 253952, 00:35:48.033 "uuid": "797ca40a-ccb6-4bc4-a257-cd6d3dfb5cbb", 00:35:48.033 "assigned_rate_limits": { 00:35:48.033 "rw_ios_per_sec": 0, 00:35:48.033 "rw_mbytes_per_sec": 0, 00:35:48.033 "r_mbytes_per_sec": 0, 00:35:48.033 "w_mbytes_per_sec": 0 00:35:48.033 }, 00:35:48.033 "claimed": false, 00:35:48.033 "zoned": false, 00:35:48.033 "supported_io_types": { 00:35:48.033 "read": true, 00:35:48.033 "write": true, 00:35:48.033 "unmap": true, 00:35:48.033 "flush": true, 00:35:48.033 "reset": true, 00:35:48.033 "nvme_admin": false, 00:35:48.033 "nvme_io": false, 00:35:48.033 "nvme_io_md": false, 00:35:48.033 "write_zeroes": true, 00:35:48.033 "zcopy": false, 00:35:48.033 "get_zone_info": false, 00:35:48.033 "zone_management": false, 00:35:48.033 "zone_append": false, 00:35:48.033 "compare": false, 00:35:48.033 "compare_and_write": false, 00:35:48.033 "abort": false, 00:35:48.033 "seek_hole": false, 00:35:48.033 "seek_data": false, 00:35:48.033 "copy": false, 00:35:48.033 
"nvme_iov_md": false 00:35:48.033 }, 00:35:48.033 "memory_domains": [ 00:35:48.033 { 00:35:48.033 "dma_device_id": "system", 00:35:48.033 "dma_device_type": 1 00:35:48.033 }, 00:35:48.033 { 00:35:48.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:48.033 "dma_device_type": 2 00:35:48.033 }, 00:35:48.033 { 00:35:48.033 "dma_device_id": "system", 00:35:48.033 "dma_device_type": 1 00:35:48.033 }, 00:35:48.033 { 00:35:48.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:48.033 "dma_device_type": 2 00:35:48.033 }, 00:35:48.033 { 00:35:48.033 "dma_device_id": "system", 00:35:48.033 "dma_device_type": 1 00:35:48.033 }, 00:35:48.033 { 00:35:48.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:48.033 "dma_device_type": 2 00:35:48.033 }, 00:35:48.033 { 00:35:48.033 "dma_device_id": "system", 00:35:48.033 "dma_device_type": 1 00:35:48.033 }, 00:35:48.033 { 00:35:48.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:48.033 "dma_device_type": 2 00:35:48.033 } 00:35:48.033 ], 00:35:48.033 "driver_specific": { 00:35:48.033 "raid": { 00:35:48.033 "uuid": "797ca40a-ccb6-4bc4-a257-cd6d3dfb5cbb", 00:35:48.033 "strip_size_kb": 64, 00:35:48.033 "state": "online", 00:35:48.033 "raid_level": "concat", 00:35:48.033 "superblock": true, 00:35:48.033 "num_base_bdevs": 4, 00:35:48.033 "num_base_bdevs_discovered": 4, 00:35:48.033 "num_base_bdevs_operational": 4, 00:35:48.033 "base_bdevs_list": [ 00:35:48.033 { 00:35:48.033 "name": "BaseBdev1", 00:35:48.033 "uuid": "0879208a-4341-44bf-aad2-bc825a336797", 00:35:48.033 "is_configured": true, 00:35:48.033 "data_offset": 2048, 00:35:48.033 "data_size": 63488 00:35:48.033 }, 00:35:48.033 { 00:35:48.033 "name": "BaseBdev2", 00:35:48.033 "uuid": "ce6504b1-44b4-48f7-bb84-cd8246ee6225", 00:35:48.033 "is_configured": true, 00:35:48.033 "data_offset": 2048, 00:35:48.033 "data_size": 63488 00:35:48.033 }, 00:35:48.033 { 00:35:48.033 "name": "BaseBdev3", 00:35:48.033 "uuid": "23b57e00-3da6-4033-a559-8cdf4ca76dc7", 00:35:48.033 "is_configured": true, 
00:35:48.033 "data_offset": 2048, 00:35:48.033 "data_size": 63488 00:35:48.033 }, 00:35:48.033 { 00:35:48.033 "name": "BaseBdev4", 00:35:48.033 "uuid": "75d5eeaa-46a2-4254-a004-8d95def1a96a", 00:35:48.033 "is_configured": true, 00:35:48.033 "data_offset": 2048, 00:35:48.033 "data_size": 63488 00:35:48.033 } 00:35:48.033 ] 00:35:48.033 } 00:35:48.033 } 00:35:48.033 }' 00:35:48.033 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:48.033 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:35:48.033 BaseBdev2 00:35:48.033 BaseBdev3 00:35:48.033 BaseBdev4' 00:35:48.033 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:48.292 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:35:48.292 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:48.292 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:35:48.292 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.292 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:48.292 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:48.292 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.292 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:48.292 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:48.292 23:17:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:48.292 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:48.292 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:35:48.292 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.292 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:48.292 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.292 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:48.292 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:48.292 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:48.292 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:35:48.292 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.292 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:48.292 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:48.292 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.292 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:48.292 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:48.292 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:35:48.292 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:35:48.292 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.292 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:48.292 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:48.292 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.293 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:48.293 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:48.293 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:35:48.293 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.293 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:48.293 [2024-12-09 23:17:28.881852] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:48.293 [2024-12-09 23:17:28.881896] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:48.293 [2024-12-09 23:17:28.881959] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:48.552 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.552 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:35:48.552 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:35:48.552 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:35:48.552 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:35:48.552 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:35:48.552 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:35:48.552 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:48.552 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:35:48.552 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:48.552 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:48.552 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:48.552 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:48.552 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:48.552 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:48.552 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:48.552 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:48.552 23:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:48.552 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.552 23:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:48.552 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:35:48.552 23:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:48.552 "name": "Existed_Raid", 00:35:48.552 "uuid": "797ca40a-ccb6-4bc4-a257-cd6d3dfb5cbb", 00:35:48.552 "strip_size_kb": 64, 00:35:48.552 "state": "offline", 00:35:48.552 "raid_level": "concat", 00:35:48.552 "superblock": true, 00:35:48.552 "num_base_bdevs": 4, 00:35:48.552 "num_base_bdevs_discovered": 3, 00:35:48.552 "num_base_bdevs_operational": 3, 00:35:48.552 "base_bdevs_list": [ 00:35:48.552 { 00:35:48.552 "name": null, 00:35:48.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:48.552 "is_configured": false, 00:35:48.552 "data_offset": 0, 00:35:48.552 "data_size": 63488 00:35:48.552 }, 00:35:48.552 { 00:35:48.552 "name": "BaseBdev2", 00:35:48.552 "uuid": "ce6504b1-44b4-48f7-bb84-cd8246ee6225", 00:35:48.552 "is_configured": true, 00:35:48.552 "data_offset": 2048, 00:35:48.552 "data_size": 63488 00:35:48.552 }, 00:35:48.552 { 00:35:48.552 "name": "BaseBdev3", 00:35:48.552 "uuid": "23b57e00-3da6-4033-a559-8cdf4ca76dc7", 00:35:48.552 "is_configured": true, 00:35:48.552 "data_offset": 2048, 00:35:48.552 "data_size": 63488 00:35:48.552 }, 00:35:48.552 { 00:35:48.552 "name": "BaseBdev4", 00:35:48.552 "uuid": "75d5eeaa-46a2-4254-a004-8d95def1a96a", 00:35:48.552 "is_configured": true, 00:35:48.552 "data_offset": 2048, 00:35:48.552 "data_size": 63488 00:35:48.552 } 00:35:48.552 ] 00:35:48.552 }' 00:35:48.552 23:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:48.552 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:48.812 23:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:35:48.812 23:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:48.812 23:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:35:48.812 23:17:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:48.812 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.812 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:48.812 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.812 23:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:35:48.812 23:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:48.812 23:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:35:48.812 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.812 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:48.812 [2024-12-09 23:17:29.414997] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:49.071 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.071 23:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:35:49.071 23:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:49.071 23:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:49.071 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.071 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.071 23:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:35:49.071 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:35:49.071 23:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:35:49.071 23:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:49.071 23:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:35:49.071 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.071 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.071 [2024-12-09 23:17:29.577690] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:35:49.071 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.071 23:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:35:49.071 23:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:49.071 23:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:35:49.071 23:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:49.071 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.071 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.334 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.334 23:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:35:49.334 23:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:49.334 23:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:35:49.334 23:17:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.334 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.334 [2024-12-09 23:17:29.723232] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:35:49.334 [2024-12-09 23:17:29.723292] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:35:49.334 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.335 23:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:35:49.335 23:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:49.335 23:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:49.335 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.335 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.335 23:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:35:49.335 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.335 23:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:35:49.335 23:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:35:49.335 23:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:35:49.335 23:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:35:49.335 23:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:35:49.335 23:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:35:49.335 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.335 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.335 BaseBdev2 00:35:49.335 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.335 23:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:35:49.335 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:35:49.335 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:49.335 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:49.335 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:49.335 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:49.335 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:49.335 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.335 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.335 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.335 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:35:49.335 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.335 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.335 [ 00:35:49.335 { 00:35:49.335 "name": "BaseBdev2", 00:35:49.335 "aliases": [ 00:35:49.335 
"43ca8d80-9f36-47ad-9fae-1bc202adc5d2" 00:35:49.335 ], 00:35:49.335 "product_name": "Malloc disk", 00:35:49.335 "block_size": 512, 00:35:49.335 "num_blocks": 65536, 00:35:49.335 "uuid": "43ca8d80-9f36-47ad-9fae-1bc202adc5d2", 00:35:49.335 "assigned_rate_limits": { 00:35:49.335 "rw_ios_per_sec": 0, 00:35:49.335 "rw_mbytes_per_sec": 0, 00:35:49.335 "r_mbytes_per_sec": 0, 00:35:49.335 "w_mbytes_per_sec": 0 00:35:49.335 }, 00:35:49.335 "claimed": false, 00:35:49.335 "zoned": false, 00:35:49.335 "supported_io_types": { 00:35:49.335 "read": true, 00:35:49.335 "write": true, 00:35:49.335 "unmap": true, 00:35:49.335 "flush": true, 00:35:49.335 "reset": true, 00:35:49.335 "nvme_admin": false, 00:35:49.335 "nvme_io": false, 00:35:49.335 "nvme_io_md": false, 00:35:49.335 "write_zeroes": true, 00:35:49.335 "zcopy": true, 00:35:49.335 "get_zone_info": false, 00:35:49.335 "zone_management": false, 00:35:49.335 "zone_append": false, 00:35:49.335 "compare": false, 00:35:49.335 "compare_and_write": false, 00:35:49.335 "abort": true, 00:35:49.335 "seek_hole": false, 00:35:49.335 "seek_data": false, 00:35:49.335 "copy": true, 00:35:49.335 "nvme_iov_md": false 00:35:49.335 }, 00:35:49.335 "memory_domains": [ 00:35:49.335 { 00:35:49.335 "dma_device_id": "system", 00:35:49.335 "dma_device_type": 1 00:35:49.335 }, 00:35:49.335 { 00:35:49.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:49.335 "dma_device_type": 2 00:35:49.335 } 00:35:49.335 ], 00:35:49.335 "driver_specific": {} 00:35:49.335 } 00:35:49.335 ] 00:35:49.335 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.335 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:49.335 23:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:35:49.335 23:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:35:49.335 23:17:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:35:49.335 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.335 23:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.599 BaseBdev3 00:35:49.599 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.599 23:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:35:49.599 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:35:49.599 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:49.599 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:49.599 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:49.599 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:49.599 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:49.599 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.599 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.599 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.599 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:35:49.599 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.599 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.599 [ 00:35:49.599 { 
00:35:49.599 "name": "BaseBdev3", 00:35:49.599 "aliases": [ 00:35:49.599 "3a8297f7-33de-4121-84fd-266baa0147d2" 00:35:49.599 ], 00:35:49.599 "product_name": "Malloc disk", 00:35:49.599 "block_size": 512, 00:35:49.599 "num_blocks": 65536, 00:35:49.599 "uuid": "3a8297f7-33de-4121-84fd-266baa0147d2", 00:35:49.599 "assigned_rate_limits": { 00:35:49.599 "rw_ios_per_sec": 0, 00:35:49.599 "rw_mbytes_per_sec": 0, 00:35:49.599 "r_mbytes_per_sec": 0, 00:35:49.599 "w_mbytes_per_sec": 0 00:35:49.599 }, 00:35:49.599 "claimed": false, 00:35:49.599 "zoned": false, 00:35:49.599 "supported_io_types": { 00:35:49.599 "read": true, 00:35:49.599 "write": true, 00:35:49.599 "unmap": true, 00:35:49.599 "flush": true, 00:35:49.599 "reset": true, 00:35:49.599 "nvme_admin": false, 00:35:49.599 "nvme_io": false, 00:35:49.599 "nvme_io_md": false, 00:35:49.599 "write_zeroes": true, 00:35:49.599 "zcopy": true, 00:35:49.599 "get_zone_info": false, 00:35:49.599 "zone_management": false, 00:35:49.599 "zone_append": false, 00:35:49.599 "compare": false, 00:35:49.599 "compare_and_write": false, 00:35:49.599 "abort": true, 00:35:49.599 "seek_hole": false, 00:35:49.599 "seek_data": false, 00:35:49.599 "copy": true, 00:35:49.599 "nvme_iov_md": false 00:35:49.599 }, 00:35:49.599 "memory_domains": [ 00:35:49.599 { 00:35:49.599 "dma_device_id": "system", 00:35:49.599 "dma_device_type": 1 00:35:49.599 }, 00:35:49.599 { 00:35:49.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:49.599 "dma_device_type": 2 00:35:49.599 } 00:35:49.599 ], 00:35:49.599 "driver_specific": {} 00:35:49.599 } 00:35:49.599 ] 00:35:49.599 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.599 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:49.599 23:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:35:49.599 23:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:35:49.599 23:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:35:49.599 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.599 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.599 BaseBdev4 00:35:49.599 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.600 23:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:35:49.600 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:35:49.600 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:49.600 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:49.600 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:49.600 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:49.600 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:49.600 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.600 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.600 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.600 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:35:49.600 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.600 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:35:49.600 [ 00:35:49.600 { 00:35:49.600 "name": "BaseBdev4", 00:35:49.600 "aliases": [ 00:35:49.600 "aad043cc-35f7-490e-ab76-eb9e643d100e" 00:35:49.600 ], 00:35:49.600 "product_name": "Malloc disk", 00:35:49.600 "block_size": 512, 00:35:49.600 "num_blocks": 65536, 00:35:49.600 "uuid": "aad043cc-35f7-490e-ab76-eb9e643d100e", 00:35:49.600 "assigned_rate_limits": { 00:35:49.600 "rw_ios_per_sec": 0, 00:35:49.600 "rw_mbytes_per_sec": 0, 00:35:49.600 "r_mbytes_per_sec": 0, 00:35:49.600 "w_mbytes_per_sec": 0 00:35:49.600 }, 00:35:49.600 "claimed": false, 00:35:49.600 "zoned": false, 00:35:49.600 "supported_io_types": { 00:35:49.600 "read": true, 00:35:49.600 "write": true, 00:35:49.600 "unmap": true, 00:35:49.600 "flush": true, 00:35:49.600 "reset": true, 00:35:49.600 "nvme_admin": false, 00:35:49.600 "nvme_io": false, 00:35:49.600 "nvme_io_md": false, 00:35:49.600 "write_zeroes": true, 00:35:49.600 "zcopy": true, 00:35:49.600 "get_zone_info": false, 00:35:49.600 "zone_management": false, 00:35:49.600 "zone_append": false, 00:35:49.600 "compare": false, 00:35:49.600 "compare_and_write": false, 00:35:49.600 "abort": true, 00:35:49.600 "seek_hole": false, 00:35:49.600 "seek_data": false, 00:35:49.600 "copy": true, 00:35:49.600 "nvme_iov_md": false 00:35:49.600 }, 00:35:49.600 "memory_domains": [ 00:35:49.600 { 00:35:49.600 "dma_device_id": "system", 00:35:49.600 "dma_device_type": 1 00:35:49.600 }, 00:35:49.600 { 00:35:49.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:49.600 "dma_device_type": 2 00:35:49.600 } 00:35:49.600 ], 00:35:49.600 "driver_specific": {} 00:35:49.600 } 00:35:49.600 ] 00:35:49.600 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.600 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:49.600 23:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:35:49.600 23:17:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:35:49.600 23:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:35:49.600 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.600 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.600 [2024-12-09 23:17:30.151177] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:49.600 [2024-12-09 23:17:30.151231] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:49.600 [2024-12-09 23:17:30.151263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:49.600 [2024-12-09 23:17:30.153779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:49.600 [2024-12-09 23:17:30.153851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:35:49.600 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.600 23:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:35:49.600 23:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:49.600 23:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:49.600 23:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:49.600 23:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:49.600 23:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:35:49.600 23:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:49.600 23:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:49.600 23:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:49.600 23:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:49.600 23:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:49.600 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.600 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.600 23:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:49.600 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.600 23:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:49.600 "name": "Existed_Raid", 00:35:49.600 "uuid": "e82864e4-ac23-407a-ae86-fc234026b2e2", 00:35:49.600 "strip_size_kb": 64, 00:35:49.600 "state": "configuring", 00:35:49.600 "raid_level": "concat", 00:35:49.600 "superblock": true, 00:35:49.600 "num_base_bdevs": 4, 00:35:49.600 "num_base_bdevs_discovered": 3, 00:35:49.600 "num_base_bdevs_operational": 4, 00:35:49.600 "base_bdevs_list": [ 00:35:49.600 { 00:35:49.600 "name": "BaseBdev1", 00:35:49.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:49.600 "is_configured": false, 00:35:49.601 "data_offset": 0, 00:35:49.601 "data_size": 0 00:35:49.601 }, 00:35:49.601 { 00:35:49.601 "name": "BaseBdev2", 00:35:49.601 "uuid": "43ca8d80-9f36-47ad-9fae-1bc202adc5d2", 00:35:49.601 "is_configured": true, 00:35:49.601 "data_offset": 2048, 00:35:49.601 "data_size": 63488 
00:35:49.601 }, 00:35:49.601 { 00:35:49.601 "name": "BaseBdev3", 00:35:49.601 "uuid": "3a8297f7-33de-4121-84fd-266baa0147d2", 00:35:49.601 "is_configured": true, 00:35:49.601 "data_offset": 2048, 00:35:49.601 "data_size": 63488 00:35:49.601 }, 00:35:49.601 { 00:35:49.601 "name": "BaseBdev4", 00:35:49.601 "uuid": "aad043cc-35f7-490e-ab76-eb9e643d100e", 00:35:49.601 "is_configured": true, 00:35:49.601 "data_offset": 2048, 00:35:49.601 "data_size": 63488 00:35:49.601 } 00:35:49.601 ] 00:35:49.601 }' 00:35:49.601 23:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:49.601 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:50.172 23:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:35:50.172 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.172 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:50.172 [2024-12-09 23:17:30.550649] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:50.172 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.172 23:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:35:50.172 23:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:50.172 23:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:50.172 23:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:50.172 23:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:50.172 23:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:35:50.172 23:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:50.172 23:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:50.172 23:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:50.172 23:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:50.172 23:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:50.172 23:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:50.172 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.172 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:50.172 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.172 23:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:50.172 "name": "Existed_Raid", 00:35:50.172 "uuid": "e82864e4-ac23-407a-ae86-fc234026b2e2", 00:35:50.172 "strip_size_kb": 64, 00:35:50.172 "state": "configuring", 00:35:50.172 "raid_level": "concat", 00:35:50.172 "superblock": true, 00:35:50.172 "num_base_bdevs": 4, 00:35:50.172 "num_base_bdevs_discovered": 2, 00:35:50.172 "num_base_bdevs_operational": 4, 00:35:50.172 "base_bdevs_list": [ 00:35:50.172 { 00:35:50.172 "name": "BaseBdev1", 00:35:50.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:50.172 "is_configured": false, 00:35:50.172 "data_offset": 0, 00:35:50.172 "data_size": 0 00:35:50.172 }, 00:35:50.172 { 00:35:50.172 "name": null, 00:35:50.172 "uuid": "43ca8d80-9f36-47ad-9fae-1bc202adc5d2", 00:35:50.172 "is_configured": false, 00:35:50.172 "data_offset": 0, 00:35:50.172 "data_size": 63488 
00:35:50.172 }, 00:35:50.172 { 00:35:50.172 "name": "BaseBdev3", 00:35:50.172 "uuid": "3a8297f7-33de-4121-84fd-266baa0147d2", 00:35:50.172 "is_configured": true, 00:35:50.172 "data_offset": 2048, 00:35:50.172 "data_size": 63488 00:35:50.172 }, 00:35:50.172 { 00:35:50.172 "name": "BaseBdev4", 00:35:50.172 "uuid": "aad043cc-35f7-490e-ab76-eb9e643d100e", 00:35:50.172 "is_configured": true, 00:35:50.172 "data_offset": 2048, 00:35:50.172 "data_size": 63488 00:35:50.172 } 00:35:50.172 ] 00:35:50.172 }' 00:35:50.172 23:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:50.172 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:50.430 23:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:50.430 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.430 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:50.430 23:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:35:50.430 23:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.430 23:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:35:50.430 23:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:35:50.430 23:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.430 23:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:50.430 [2024-12-09 23:17:31.054130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:50.430 BaseBdev1 00:35:50.430 23:17:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.430 23:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:35:50.430 23:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:35:50.430 23:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:50.430 23:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:50.430 23:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:50.430 23:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:50.430 23:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:50.430 23:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.430 23:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:50.689 23:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.689 23:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:50.689 23:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.689 23:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:50.689 [ 00:35:50.689 { 00:35:50.689 "name": "BaseBdev1", 00:35:50.689 "aliases": [ 00:35:50.689 "ca225fcf-2fa4-492b-a054-ef582ec1d812" 00:35:50.689 ], 00:35:50.689 "product_name": "Malloc disk", 00:35:50.689 "block_size": 512, 00:35:50.689 "num_blocks": 65536, 00:35:50.689 "uuid": "ca225fcf-2fa4-492b-a054-ef582ec1d812", 00:35:50.689 "assigned_rate_limits": { 00:35:50.689 "rw_ios_per_sec": 0, 00:35:50.689 "rw_mbytes_per_sec": 0, 
00:35:50.689 "r_mbytes_per_sec": 0, 00:35:50.689 "w_mbytes_per_sec": 0 00:35:50.689 }, 00:35:50.689 "claimed": true, 00:35:50.689 "claim_type": "exclusive_write", 00:35:50.689 "zoned": false, 00:35:50.689 "supported_io_types": { 00:35:50.689 "read": true, 00:35:50.689 "write": true, 00:35:50.689 "unmap": true, 00:35:50.689 "flush": true, 00:35:50.689 "reset": true, 00:35:50.689 "nvme_admin": false, 00:35:50.689 "nvme_io": false, 00:35:50.689 "nvme_io_md": false, 00:35:50.689 "write_zeroes": true, 00:35:50.689 "zcopy": true, 00:35:50.689 "get_zone_info": false, 00:35:50.689 "zone_management": false, 00:35:50.689 "zone_append": false, 00:35:50.689 "compare": false, 00:35:50.689 "compare_and_write": false, 00:35:50.689 "abort": true, 00:35:50.689 "seek_hole": false, 00:35:50.689 "seek_data": false, 00:35:50.689 "copy": true, 00:35:50.689 "nvme_iov_md": false 00:35:50.689 }, 00:35:50.689 "memory_domains": [ 00:35:50.689 { 00:35:50.689 "dma_device_id": "system", 00:35:50.689 "dma_device_type": 1 00:35:50.689 }, 00:35:50.689 { 00:35:50.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:50.689 "dma_device_type": 2 00:35:50.689 } 00:35:50.689 ], 00:35:50.689 "driver_specific": {} 00:35:50.689 } 00:35:50.689 ] 00:35:50.690 23:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.690 23:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:50.690 23:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:35:50.690 23:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:50.690 23:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:50.690 23:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:50.690 23:17:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:50.690 23:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:50.690 23:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:50.690 23:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:50.690 23:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:50.690 23:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:50.690 23:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:50.690 23:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:50.690 23:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.690 23:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:50.690 23:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.690 23:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:50.690 "name": "Existed_Raid", 00:35:50.690 "uuid": "e82864e4-ac23-407a-ae86-fc234026b2e2", 00:35:50.690 "strip_size_kb": 64, 00:35:50.690 "state": "configuring", 00:35:50.690 "raid_level": "concat", 00:35:50.690 "superblock": true, 00:35:50.690 "num_base_bdevs": 4, 00:35:50.690 "num_base_bdevs_discovered": 3, 00:35:50.690 "num_base_bdevs_operational": 4, 00:35:50.690 "base_bdevs_list": [ 00:35:50.690 { 00:35:50.690 "name": "BaseBdev1", 00:35:50.690 "uuid": "ca225fcf-2fa4-492b-a054-ef582ec1d812", 00:35:50.690 "is_configured": true, 00:35:50.690 "data_offset": 2048, 00:35:50.690 "data_size": 63488 00:35:50.690 }, 00:35:50.690 { 
00:35:50.690 "name": null, 00:35:50.690 "uuid": "43ca8d80-9f36-47ad-9fae-1bc202adc5d2", 00:35:50.690 "is_configured": false, 00:35:50.690 "data_offset": 0, 00:35:50.690 "data_size": 63488 00:35:50.690 }, 00:35:50.690 { 00:35:50.690 "name": "BaseBdev3", 00:35:50.690 "uuid": "3a8297f7-33de-4121-84fd-266baa0147d2", 00:35:50.690 "is_configured": true, 00:35:50.690 "data_offset": 2048, 00:35:50.690 "data_size": 63488 00:35:50.690 }, 00:35:50.690 { 00:35:50.690 "name": "BaseBdev4", 00:35:50.690 "uuid": "aad043cc-35f7-490e-ab76-eb9e643d100e", 00:35:50.690 "is_configured": true, 00:35:50.690 "data_offset": 2048, 00:35:50.690 "data_size": 63488 00:35:50.690 } 00:35:50.690 ] 00:35:50.690 }' 00:35:50.690 23:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:50.690 23:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:50.949 23:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:50.949 23:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:35:50.949 23:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.949 23:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:50.949 23:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.949 23:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:35:50.949 23:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:35:50.949 23:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.949 23:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:50.949 [2024-12-09 23:17:31.565746] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:35:50.949 23:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.949 23:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:35:50.949 23:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:50.949 23:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:50.949 23:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:50.949 23:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:50.949 23:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:50.949 23:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:50.949 23:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:50.949 23:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:50.949 23:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:50.949 23:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:50.949 23:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:50.949 23:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.949 23:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:51.208 23:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.208 23:17:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:51.208 "name": "Existed_Raid", 00:35:51.208 "uuid": "e82864e4-ac23-407a-ae86-fc234026b2e2", 00:35:51.208 "strip_size_kb": 64, 00:35:51.208 "state": "configuring", 00:35:51.208 "raid_level": "concat", 00:35:51.208 "superblock": true, 00:35:51.208 "num_base_bdevs": 4, 00:35:51.208 "num_base_bdevs_discovered": 2, 00:35:51.208 "num_base_bdevs_operational": 4, 00:35:51.208 "base_bdevs_list": [ 00:35:51.208 { 00:35:51.208 "name": "BaseBdev1", 00:35:51.208 "uuid": "ca225fcf-2fa4-492b-a054-ef582ec1d812", 00:35:51.208 "is_configured": true, 00:35:51.208 "data_offset": 2048, 00:35:51.208 "data_size": 63488 00:35:51.208 }, 00:35:51.208 { 00:35:51.208 "name": null, 00:35:51.208 "uuid": "43ca8d80-9f36-47ad-9fae-1bc202adc5d2", 00:35:51.208 "is_configured": false, 00:35:51.208 "data_offset": 0, 00:35:51.208 "data_size": 63488 00:35:51.208 }, 00:35:51.208 { 00:35:51.208 "name": null, 00:35:51.208 "uuid": "3a8297f7-33de-4121-84fd-266baa0147d2", 00:35:51.208 "is_configured": false, 00:35:51.208 "data_offset": 0, 00:35:51.208 "data_size": 63488 00:35:51.208 }, 00:35:51.208 { 00:35:51.208 "name": "BaseBdev4", 00:35:51.208 "uuid": "aad043cc-35f7-490e-ab76-eb9e643d100e", 00:35:51.208 "is_configured": true, 00:35:51.208 "data_offset": 2048, 00:35:51.208 "data_size": 63488 00:35:51.208 } 00:35:51.208 ] 00:35:51.208 }' 00:35:51.208 23:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:51.208 23:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:51.468 23:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:35:51.468 23:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:51.468 23:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.468 
23:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:51.468 23:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.468 23:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:35:51.468 23:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:35:51.468 23:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.468 23:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:51.468 [2024-12-09 23:17:32.037086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:51.468 23:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.468 23:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:35:51.468 23:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:51.468 23:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:51.468 23:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:51.468 23:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:51.468 23:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:51.468 23:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:51.468 23:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:51.468 23:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:35:51.468 23:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:51.468 23:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:51.468 23:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.468 23:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:51.468 23:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:51.468 23:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.468 23:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:51.468 "name": "Existed_Raid", 00:35:51.468 "uuid": "e82864e4-ac23-407a-ae86-fc234026b2e2", 00:35:51.468 "strip_size_kb": 64, 00:35:51.468 "state": "configuring", 00:35:51.468 "raid_level": "concat", 00:35:51.468 "superblock": true, 00:35:51.468 "num_base_bdevs": 4, 00:35:51.468 "num_base_bdevs_discovered": 3, 00:35:51.468 "num_base_bdevs_operational": 4, 00:35:51.468 "base_bdevs_list": [ 00:35:51.468 { 00:35:51.468 "name": "BaseBdev1", 00:35:51.468 "uuid": "ca225fcf-2fa4-492b-a054-ef582ec1d812", 00:35:51.468 "is_configured": true, 00:35:51.468 "data_offset": 2048, 00:35:51.468 "data_size": 63488 00:35:51.468 }, 00:35:51.468 { 00:35:51.468 "name": null, 00:35:51.468 "uuid": "43ca8d80-9f36-47ad-9fae-1bc202adc5d2", 00:35:51.468 "is_configured": false, 00:35:51.468 "data_offset": 0, 00:35:51.468 "data_size": 63488 00:35:51.468 }, 00:35:51.468 { 00:35:51.468 "name": "BaseBdev3", 00:35:51.468 "uuid": "3a8297f7-33de-4121-84fd-266baa0147d2", 00:35:51.468 "is_configured": true, 00:35:51.468 "data_offset": 2048, 00:35:51.468 "data_size": 63488 00:35:51.468 }, 00:35:51.468 { 00:35:51.468 "name": "BaseBdev4", 00:35:51.468 "uuid": 
"aad043cc-35f7-490e-ab76-eb9e643d100e", 00:35:51.468 "is_configured": true, 00:35:51.468 "data_offset": 2048, 00:35:51.468 "data_size": 63488 00:35:51.468 } 00:35:51.468 ] 00:35:51.468 }' 00:35:51.468 23:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:51.468 23:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:52.046 23:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:52.046 23:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.046 23:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:52.046 23:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:35:52.046 23:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.046 23:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:35:52.046 23:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:35:52.046 23:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.046 23:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:52.046 [2024-12-09 23:17:32.520487] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:52.046 23:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.046 23:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:35:52.046 23:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:52.046 23:17:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:52.046 23:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:52.046 23:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:52.046 23:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:52.046 23:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:52.046 23:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:52.046 23:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:52.046 23:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:52.046 23:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:52.046 23:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.046 23:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:52.046 23:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:52.046 23:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.046 23:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:52.046 "name": "Existed_Raid", 00:35:52.046 "uuid": "e82864e4-ac23-407a-ae86-fc234026b2e2", 00:35:52.046 "strip_size_kb": 64, 00:35:52.046 "state": "configuring", 00:35:52.046 "raid_level": "concat", 00:35:52.046 "superblock": true, 00:35:52.046 "num_base_bdevs": 4, 00:35:52.046 "num_base_bdevs_discovered": 2, 00:35:52.046 "num_base_bdevs_operational": 4, 00:35:52.046 "base_bdevs_list": [ 00:35:52.046 { 00:35:52.046 "name": null, 00:35:52.046 
"uuid": "ca225fcf-2fa4-492b-a054-ef582ec1d812", 00:35:52.046 "is_configured": false, 00:35:52.046 "data_offset": 0, 00:35:52.046 "data_size": 63488 00:35:52.046 }, 00:35:52.046 { 00:35:52.046 "name": null, 00:35:52.046 "uuid": "43ca8d80-9f36-47ad-9fae-1bc202adc5d2", 00:35:52.046 "is_configured": false, 00:35:52.046 "data_offset": 0, 00:35:52.046 "data_size": 63488 00:35:52.046 }, 00:35:52.046 { 00:35:52.046 "name": "BaseBdev3", 00:35:52.046 "uuid": "3a8297f7-33de-4121-84fd-266baa0147d2", 00:35:52.046 "is_configured": true, 00:35:52.046 "data_offset": 2048, 00:35:52.046 "data_size": 63488 00:35:52.046 }, 00:35:52.046 { 00:35:52.046 "name": "BaseBdev4", 00:35:52.046 "uuid": "aad043cc-35f7-490e-ab76-eb9e643d100e", 00:35:52.046 "is_configured": true, 00:35:52.046 "data_offset": 2048, 00:35:52.046 "data_size": 63488 00:35:52.046 } 00:35:52.046 ] 00:35:52.046 }' 00:35:52.046 23:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:52.046 23:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:52.614 23:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:52.614 23:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:35:52.614 23:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.614 23:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:52.614 23:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.614 23:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:35:52.614 23:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:35:52.614 23:17:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.614 23:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:52.614 [2024-12-09 23:17:33.119026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:52.614 23:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.614 23:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:35:52.614 23:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:52.614 23:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:52.615 23:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:52.615 23:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:52.615 23:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:52.615 23:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:52.615 23:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:52.615 23:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:52.615 23:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:52.615 23:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:52.615 23:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:52.615 23:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.615 23:17:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:52.615 23:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.615 23:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:52.615 "name": "Existed_Raid", 00:35:52.615 "uuid": "e82864e4-ac23-407a-ae86-fc234026b2e2", 00:35:52.615 "strip_size_kb": 64, 00:35:52.615 "state": "configuring", 00:35:52.615 "raid_level": "concat", 00:35:52.615 "superblock": true, 00:35:52.615 "num_base_bdevs": 4, 00:35:52.615 "num_base_bdevs_discovered": 3, 00:35:52.615 "num_base_bdevs_operational": 4, 00:35:52.615 "base_bdevs_list": [ 00:35:52.615 { 00:35:52.615 "name": null, 00:35:52.615 "uuid": "ca225fcf-2fa4-492b-a054-ef582ec1d812", 00:35:52.615 "is_configured": false, 00:35:52.615 "data_offset": 0, 00:35:52.615 "data_size": 63488 00:35:52.615 }, 00:35:52.615 { 00:35:52.615 "name": "BaseBdev2", 00:35:52.615 "uuid": "43ca8d80-9f36-47ad-9fae-1bc202adc5d2", 00:35:52.615 "is_configured": true, 00:35:52.615 "data_offset": 2048, 00:35:52.615 "data_size": 63488 00:35:52.615 }, 00:35:52.615 { 00:35:52.615 "name": "BaseBdev3", 00:35:52.615 "uuid": "3a8297f7-33de-4121-84fd-266baa0147d2", 00:35:52.615 "is_configured": true, 00:35:52.615 "data_offset": 2048, 00:35:52.615 "data_size": 63488 00:35:52.615 }, 00:35:52.615 { 00:35:52.615 "name": "BaseBdev4", 00:35:52.615 "uuid": "aad043cc-35f7-490e-ab76-eb9e643d100e", 00:35:52.615 "is_configured": true, 00:35:52.615 "data_offset": 2048, 00:35:52.615 "data_size": 63488 00:35:52.615 } 00:35:52.615 ] 00:35:52.615 }' 00:35:52.615 23:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:52.615 23:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:53.182 23:17:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ca225fcf-2fa4-492b-a054-ef582ec1d812 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:53.182 [2024-12-09 23:17:33.683689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:35:53.182 [2024-12-09 23:17:33.683960] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:35:53.182 [2024-12-09 23:17:33.683976] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:35:53.182 [2024-12-09 23:17:33.684304] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:35:53.182 [2024-12-09 23:17:33.684497] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:35:53.182 [2024-12-09 23:17:33.684519] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:35:53.182 [2024-12-09 23:17:33.684672] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:53.182 NewBaseBdev 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.182 23:17:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:53.182 [ 00:35:53.182 { 00:35:53.182 "name": "NewBaseBdev", 00:35:53.182 "aliases": [ 00:35:53.182 "ca225fcf-2fa4-492b-a054-ef582ec1d812" 00:35:53.182 ], 00:35:53.182 "product_name": "Malloc disk", 00:35:53.182 "block_size": 512, 00:35:53.182 "num_blocks": 65536, 00:35:53.182 "uuid": "ca225fcf-2fa4-492b-a054-ef582ec1d812", 00:35:53.182 "assigned_rate_limits": { 00:35:53.182 "rw_ios_per_sec": 0, 00:35:53.182 "rw_mbytes_per_sec": 0, 00:35:53.182 "r_mbytes_per_sec": 0, 00:35:53.182 "w_mbytes_per_sec": 0 00:35:53.182 }, 00:35:53.182 "claimed": true, 00:35:53.182 "claim_type": "exclusive_write", 00:35:53.182 "zoned": false, 00:35:53.182 "supported_io_types": { 00:35:53.182 "read": true, 00:35:53.182 "write": true, 00:35:53.182 "unmap": true, 00:35:53.182 "flush": true, 00:35:53.182 "reset": true, 00:35:53.182 "nvme_admin": false, 00:35:53.182 "nvme_io": false, 00:35:53.182 "nvme_io_md": false, 00:35:53.182 "write_zeroes": true, 00:35:53.182 "zcopy": true, 00:35:53.182 "get_zone_info": false, 00:35:53.182 "zone_management": false, 00:35:53.182 "zone_append": false, 00:35:53.182 "compare": false, 00:35:53.182 "compare_and_write": false, 00:35:53.182 "abort": true, 00:35:53.182 "seek_hole": false, 00:35:53.182 "seek_data": false, 00:35:53.182 "copy": true, 00:35:53.182 "nvme_iov_md": false 00:35:53.182 }, 00:35:53.182 "memory_domains": [ 00:35:53.182 { 00:35:53.182 "dma_device_id": "system", 00:35:53.182 "dma_device_type": 1 00:35:53.182 }, 00:35:53.182 { 00:35:53.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:53.182 "dma_device_type": 2 00:35:53.182 } 00:35:53.182 ], 00:35:53.182 "driver_specific": {} 00:35:53.182 } 00:35:53.182 ] 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:53.182 23:17:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:53.182 "name": "Existed_Raid", 00:35:53.182 "uuid": "e82864e4-ac23-407a-ae86-fc234026b2e2", 00:35:53.182 "strip_size_kb": 64, 00:35:53.182 
"state": "online", 00:35:53.182 "raid_level": "concat", 00:35:53.182 "superblock": true, 00:35:53.182 "num_base_bdevs": 4, 00:35:53.182 "num_base_bdevs_discovered": 4, 00:35:53.182 "num_base_bdevs_operational": 4, 00:35:53.182 "base_bdevs_list": [ 00:35:53.182 { 00:35:53.182 "name": "NewBaseBdev", 00:35:53.182 "uuid": "ca225fcf-2fa4-492b-a054-ef582ec1d812", 00:35:53.182 "is_configured": true, 00:35:53.182 "data_offset": 2048, 00:35:53.182 "data_size": 63488 00:35:53.182 }, 00:35:53.182 { 00:35:53.182 "name": "BaseBdev2", 00:35:53.182 "uuid": "43ca8d80-9f36-47ad-9fae-1bc202adc5d2", 00:35:53.182 "is_configured": true, 00:35:53.182 "data_offset": 2048, 00:35:53.182 "data_size": 63488 00:35:53.182 }, 00:35:53.182 { 00:35:53.182 "name": "BaseBdev3", 00:35:53.182 "uuid": "3a8297f7-33de-4121-84fd-266baa0147d2", 00:35:53.182 "is_configured": true, 00:35:53.182 "data_offset": 2048, 00:35:53.182 "data_size": 63488 00:35:53.182 }, 00:35:53.182 { 00:35:53.182 "name": "BaseBdev4", 00:35:53.182 "uuid": "aad043cc-35f7-490e-ab76-eb9e643d100e", 00:35:53.182 "is_configured": true, 00:35:53.182 "data_offset": 2048, 00:35:53.182 "data_size": 63488 00:35:53.182 } 00:35:53.182 ] 00:35:53.182 }' 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:53.182 23:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:53.750 23:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:35:53.750 23:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:35:53.750 23:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:35:53.750 23:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:35:53.750 23:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:35:53.750 
23:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:35:53.750 23:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:35:53.751 23:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.751 23:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:53.751 23:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:35:53.751 [2024-12-09 23:17:34.115534] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:53.751 23:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.751 23:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:53.751 "name": "Existed_Raid", 00:35:53.751 "aliases": [ 00:35:53.751 "e82864e4-ac23-407a-ae86-fc234026b2e2" 00:35:53.751 ], 00:35:53.751 "product_name": "Raid Volume", 00:35:53.751 "block_size": 512, 00:35:53.751 "num_blocks": 253952, 00:35:53.751 "uuid": "e82864e4-ac23-407a-ae86-fc234026b2e2", 00:35:53.751 "assigned_rate_limits": { 00:35:53.751 "rw_ios_per_sec": 0, 00:35:53.751 "rw_mbytes_per_sec": 0, 00:35:53.751 "r_mbytes_per_sec": 0, 00:35:53.751 "w_mbytes_per_sec": 0 00:35:53.751 }, 00:35:53.751 "claimed": false, 00:35:53.751 "zoned": false, 00:35:53.751 "supported_io_types": { 00:35:53.751 "read": true, 00:35:53.751 "write": true, 00:35:53.751 "unmap": true, 00:35:53.751 "flush": true, 00:35:53.751 "reset": true, 00:35:53.751 "nvme_admin": false, 00:35:53.751 "nvme_io": false, 00:35:53.751 "nvme_io_md": false, 00:35:53.751 "write_zeroes": true, 00:35:53.751 "zcopy": false, 00:35:53.751 "get_zone_info": false, 00:35:53.751 "zone_management": false, 00:35:53.751 "zone_append": false, 00:35:53.751 "compare": false, 00:35:53.751 "compare_and_write": false, 00:35:53.751 "abort": 
false, 00:35:53.751 "seek_hole": false, 00:35:53.751 "seek_data": false, 00:35:53.751 "copy": false, 00:35:53.751 "nvme_iov_md": false 00:35:53.751 }, 00:35:53.751 "memory_domains": [ 00:35:53.751 { 00:35:53.751 "dma_device_id": "system", 00:35:53.751 "dma_device_type": 1 00:35:53.751 }, 00:35:53.751 { 00:35:53.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:53.751 "dma_device_type": 2 00:35:53.751 }, 00:35:53.751 { 00:35:53.751 "dma_device_id": "system", 00:35:53.751 "dma_device_type": 1 00:35:53.751 }, 00:35:53.751 { 00:35:53.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:53.751 "dma_device_type": 2 00:35:53.751 }, 00:35:53.751 { 00:35:53.751 "dma_device_id": "system", 00:35:53.751 "dma_device_type": 1 00:35:53.751 }, 00:35:53.751 { 00:35:53.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:53.751 "dma_device_type": 2 00:35:53.751 }, 00:35:53.751 { 00:35:53.751 "dma_device_id": "system", 00:35:53.751 "dma_device_type": 1 00:35:53.751 }, 00:35:53.751 { 00:35:53.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:53.751 "dma_device_type": 2 00:35:53.751 } 00:35:53.751 ], 00:35:53.751 "driver_specific": { 00:35:53.751 "raid": { 00:35:53.751 "uuid": "e82864e4-ac23-407a-ae86-fc234026b2e2", 00:35:53.751 "strip_size_kb": 64, 00:35:53.751 "state": "online", 00:35:53.751 "raid_level": "concat", 00:35:53.751 "superblock": true, 00:35:53.751 "num_base_bdevs": 4, 00:35:53.751 "num_base_bdevs_discovered": 4, 00:35:53.751 "num_base_bdevs_operational": 4, 00:35:53.751 "base_bdevs_list": [ 00:35:53.751 { 00:35:53.751 "name": "NewBaseBdev", 00:35:53.751 "uuid": "ca225fcf-2fa4-492b-a054-ef582ec1d812", 00:35:53.751 "is_configured": true, 00:35:53.751 "data_offset": 2048, 00:35:53.751 "data_size": 63488 00:35:53.751 }, 00:35:53.751 { 00:35:53.751 "name": "BaseBdev2", 00:35:53.751 "uuid": "43ca8d80-9f36-47ad-9fae-1bc202adc5d2", 00:35:53.751 "is_configured": true, 00:35:53.751 "data_offset": 2048, 00:35:53.751 "data_size": 63488 00:35:53.751 }, 00:35:53.751 { 00:35:53.751 
"name": "BaseBdev3", 00:35:53.751 "uuid": "3a8297f7-33de-4121-84fd-266baa0147d2", 00:35:53.751 "is_configured": true, 00:35:53.751 "data_offset": 2048, 00:35:53.751 "data_size": 63488 00:35:53.751 }, 00:35:53.751 { 00:35:53.751 "name": "BaseBdev4", 00:35:53.751 "uuid": "aad043cc-35f7-490e-ab76-eb9e643d100e", 00:35:53.751 "is_configured": true, 00:35:53.751 "data_offset": 2048, 00:35:53.751 "data_size": 63488 00:35:53.751 } 00:35:53.751 ] 00:35:53.751 } 00:35:53.751 } 00:35:53.751 }' 00:35:53.751 23:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:53.751 23:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:35:53.751 BaseBdev2 00:35:53.751 BaseBdev3 00:35:53.751 BaseBdev4' 00:35:53.751 23:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:53.751 23:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:35:53.751 23:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:53.751 23:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:53.751 23:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:35:53.751 23:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.751 23:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:53.751 23:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.751 23:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:53.751 23:17:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:53.751 23:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:53.751 23:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:35:53.751 23:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.751 23:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:53.751 23:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:53.751 23:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.751 23:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:53.751 23:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:53.751 23:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:53.751 23:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:35:53.751 23:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:53.751 23:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.751 23:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:53.751 23:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.751 23:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:53.751 23:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:35:53.751 23:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:53.751 23:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:35:53.751 23:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.751 23:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:53.751 23:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:54.010 23:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.010 23:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:54.010 23:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:54.010 23:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:35:54.010 23:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.010 23:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:54.010 [2024-12-09 23:17:34.410774] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:54.010 [2024-12-09 23:17:34.410815] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:54.010 [2024-12-09 23:17:34.410900] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:54.010 [2024-12-09 23:17:34.410972] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:54.010 [2024-12-09 23:17:34.410984] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:35:54.010 23:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.010 23:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71819 00:35:54.010 23:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 71819 ']' 00:35:54.010 23:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 71819 00:35:54.010 23:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:35:54.010 23:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:54.010 23:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71819 00:35:54.010 23:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:54.010 23:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:54.010 killing process with pid 71819 00:35:54.010 23:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71819' 00:35:54.010 23:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 71819 00:35:54.010 [2024-12-09 23:17:34.455994] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:54.010 23:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 71819 00:35:54.268 [2024-12-09 23:17:34.881959] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:55.651 23:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:35:55.651 00:35:55.651 real 0m11.527s 00:35:55.651 user 0m18.244s 00:35:55.651 sys 0m2.339s 00:35:55.651 23:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:55.651 23:17:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:55.651 ************************************ 00:35:55.651 END TEST raid_state_function_test_sb 00:35:55.651 ************************************ 00:35:55.651 23:17:36 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:35:55.651 23:17:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:55.651 23:17:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:55.651 23:17:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:55.651 ************************************ 00:35:55.651 START TEST raid_superblock_test 00:35:55.651 ************************************ 00:35:55.651 23:17:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:35:55.651 23:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:35:55.651 23:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:35:55.651 23:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:35:55.651 23:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:35:55.651 23:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:35:55.651 23:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:35:55.651 23:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:35:55.651 23:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:35:55.651 23:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:35:55.651 23:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:35:55.651 23:17:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:35:55.651 23:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:35:55.651 23:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:35:55.651 23:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:35:55.651 23:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:35:55.651 23:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:35:55.651 23:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72489 00:35:55.651 23:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72489 00:35:55.651 23:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:35:55.651 23:17:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72489 ']' 00:35:55.652 23:17:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:55.652 23:17:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:55.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:55.652 23:17:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:55.652 23:17:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:55.652 23:17:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:55.652 [2024-12-09 23:17:36.234286] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:35:55.652 [2024-12-09 23:17:36.234439] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72489 ] 00:35:55.910 [2024-12-09 23:17:36.417851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:55.910 [2024-12-09 23:17:36.537755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:56.167 [2024-12-09 23:17:36.753587] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:56.167 [2024-12-09 23:17:36.753629] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:56.735 23:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:56.735 23:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:35:56.735 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:35:56.735 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:35:56.735 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:35:56.735 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:35:56.735 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:35:56.735 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:56.735 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:35:56.735 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:56.735 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:35:56.735 
23:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.735 23:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:56.735 malloc1 00:35:56.735 23:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.735 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:56.735 23:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.735 23:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:56.735 [2024-12-09 23:17:37.157380] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:56.735 [2024-12-09 23:17:37.157455] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:56.735 [2024-12-09 23:17:37.157480] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:35:56.735 [2024-12-09 23:17:37.157492] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:56.735 [2024-12-09 23:17:37.160056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:56.735 [2024-12-09 23:17:37.160097] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:56.735 pt1 00:35:56.735 23:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.735 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:35:56.735 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:35:56.735 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:35:56.735 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:35:56.735 23:17:37 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:35:56.735 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:56.735 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:35:56.735 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:56.735 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:35:56.735 23:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.735 23:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:56.735 malloc2 00:35:56.735 23:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.735 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:56.735 23:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.735 23:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:56.735 [2024-12-09 23:17:37.214707] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:56.735 [2024-12-09 23:17:37.214777] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:56.735 [2024-12-09 23:17:37.214805] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:35:56.735 [2024-12-09 23:17:37.214817] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:56.736 [2024-12-09 23:17:37.217356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:56.736 [2024-12-09 23:17:37.217417] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:56.736 
pt2 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:56.736 malloc3 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:56.736 [2024-12-09 23:17:37.282885] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:35:56.736 [2024-12-09 23:17:37.282946] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:56.736 [2024-12-09 23:17:37.282972] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:35:56.736 [2024-12-09 23:17:37.282984] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:56.736 [2024-12-09 23:17:37.285472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:56.736 [2024-12-09 23:17:37.285512] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:35:56.736 pt3 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:56.736 malloc4 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:56.736 [2024-12-09 23:17:37.339520] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:35:56.736 [2024-12-09 23:17:37.339601] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:56.736 [2024-12-09 23:17:37.339627] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:35:56.736 [2024-12-09 23:17:37.339639] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:56.736 [2024-12-09 23:17:37.342055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:56.736 [2024-12-09 23:17:37.342094] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:35:56.736 pt4 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:56.736 [2024-12-09 23:17:37.351541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:56.736 [2024-12-09 
23:17:37.353637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:56.736 [2024-12-09 23:17:37.353726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:35:56.736 [2024-12-09 23:17:37.353771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:35:56.736 [2024-12-09 23:17:37.353954] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:35:56.736 [2024-12-09 23:17:37.353967] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:35:56.736 [2024-12-09 23:17:37.354240] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:35:56.736 [2024-12-09 23:17:37.354440] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:35:56.736 [2024-12-09 23:17:37.354457] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:35:56.736 [2024-12-09 23:17:37.354630] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.736 23:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:56.995 23:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.995 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:56.995 "name": "raid_bdev1", 00:35:56.995 "uuid": "57b73536-4f44-482c-a9b7-f3cc15e51c22", 00:35:56.995 "strip_size_kb": 64, 00:35:56.995 "state": "online", 00:35:56.995 "raid_level": "concat", 00:35:56.995 "superblock": true, 00:35:56.995 "num_base_bdevs": 4, 00:35:56.995 "num_base_bdevs_discovered": 4, 00:35:56.995 "num_base_bdevs_operational": 4, 00:35:56.995 "base_bdevs_list": [ 00:35:56.995 { 00:35:56.995 "name": "pt1", 00:35:56.995 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:56.995 "is_configured": true, 00:35:56.995 "data_offset": 2048, 00:35:56.995 "data_size": 63488 00:35:56.995 }, 00:35:56.995 { 00:35:56.995 "name": "pt2", 00:35:56.995 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:56.995 "is_configured": true, 00:35:56.995 "data_offset": 2048, 00:35:56.995 "data_size": 63488 00:35:56.995 }, 00:35:56.995 { 00:35:56.995 "name": "pt3", 00:35:56.995 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:56.995 "is_configured": true, 00:35:56.995 "data_offset": 2048, 00:35:56.995 
"data_size": 63488 00:35:56.995 }, 00:35:56.995 { 00:35:56.995 "name": "pt4", 00:35:56.995 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:56.995 "is_configured": true, 00:35:56.995 "data_offset": 2048, 00:35:56.995 "data_size": 63488 00:35:56.995 } 00:35:56.995 ] 00:35:56.995 }' 00:35:56.995 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:56.995 23:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.254 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:35:57.254 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:35:57.254 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:35:57.254 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:35:57.254 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:35:57.254 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:35:57.254 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:35:57.254 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:35:57.254 23:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.254 23:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.254 [2024-12-09 23:17:37.835166] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:57.254 23:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.254 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:57.254 "name": "raid_bdev1", 00:35:57.254 "aliases": [ 00:35:57.254 "57b73536-4f44-482c-a9b7-f3cc15e51c22" 
00:35:57.254 ], 00:35:57.254 "product_name": "Raid Volume", 00:35:57.254 "block_size": 512, 00:35:57.254 "num_blocks": 253952, 00:35:57.254 "uuid": "57b73536-4f44-482c-a9b7-f3cc15e51c22", 00:35:57.254 "assigned_rate_limits": { 00:35:57.254 "rw_ios_per_sec": 0, 00:35:57.254 "rw_mbytes_per_sec": 0, 00:35:57.254 "r_mbytes_per_sec": 0, 00:35:57.254 "w_mbytes_per_sec": 0 00:35:57.254 }, 00:35:57.254 "claimed": false, 00:35:57.254 "zoned": false, 00:35:57.254 "supported_io_types": { 00:35:57.254 "read": true, 00:35:57.254 "write": true, 00:35:57.254 "unmap": true, 00:35:57.254 "flush": true, 00:35:57.254 "reset": true, 00:35:57.254 "nvme_admin": false, 00:35:57.254 "nvme_io": false, 00:35:57.254 "nvme_io_md": false, 00:35:57.254 "write_zeroes": true, 00:35:57.254 "zcopy": false, 00:35:57.254 "get_zone_info": false, 00:35:57.254 "zone_management": false, 00:35:57.254 "zone_append": false, 00:35:57.254 "compare": false, 00:35:57.254 "compare_and_write": false, 00:35:57.254 "abort": false, 00:35:57.254 "seek_hole": false, 00:35:57.254 "seek_data": false, 00:35:57.254 "copy": false, 00:35:57.254 "nvme_iov_md": false 00:35:57.254 }, 00:35:57.254 "memory_domains": [ 00:35:57.254 { 00:35:57.254 "dma_device_id": "system", 00:35:57.254 "dma_device_type": 1 00:35:57.254 }, 00:35:57.254 { 00:35:57.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:57.254 "dma_device_type": 2 00:35:57.254 }, 00:35:57.254 { 00:35:57.254 "dma_device_id": "system", 00:35:57.254 "dma_device_type": 1 00:35:57.254 }, 00:35:57.254 { 00:35:57.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:57.254 "dma_device_type": 2 00:35:57.254 }, 00:35:57.254 { 00:35:57.254 "dma_device_id": "system", 00:35:57.254 "dma_device_type": 1 00:35:57.254 }, 00:35:57.254 { 00:35:57.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:57.254 "dma_device_type": 2 00:35:57.254 }, 00:35:57.254 { 00:35:57.254 "dma_device_id": "system", 00:35:57.254 "dma_device_type": 1 00:35:57.254 }, 00:35:57.254 { 00:35:57.254 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:35:57.254 "dma_device_type": 2 00:35:57.254 } 00:35:57.254 ], 00:35:57.254 "driver_specific": { 00:35:57.254 "raid": { 00:35:57.254 "uuid": "57b73536-4f44-482c-a9b7-f3cc15e51c22", 00:35:57.254 "strip_size_kb": 64, 00:35:57.254 "state": "online", 00:35:57.254 "raid_level": "concat", 00:35:57.254 "superblock": true, 00:35:57.254 "num_base_bdevs": 4, 00:35:57.254 "num_base_bdevs_discovered": 4, 00:35:57.254 "num_base_bdevs_operational": 4, 00:35:57.254 "base_bdevs_list": [ 00:35:57.254 { 00:35:57.254 "name": "pt1", 00:35:57.254 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:57.254 "is_configured": true, 00:35:57.254 "data_offset": 2048, 00:35:57.254 "data_size": 63488 00:35:57.254 }, 00:35:57.254 { 00:35:57.254 "name": "pt2", 00:35:57.254 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:57.254 "is_configured": true, 00:35:57.254 "data_offset": 2048, 00:35:57.254 "data_size": 63488 00:35:57.254 }, 00:35:57.254 { 00:35:57.254 "name": "pt3", 00:35:57.254 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:57.254 "is_configured": true, 00:35:57.254 "data_offset": 2048, 00:35:57.254 "data_size": 63488 00:35:57.254 }, 00:35:57.254 { 00:35:57.254 "name": "pt4", 00:35:57.254 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:57.254 "is_configured": true, 00:35:57.254 "data_offset": 2048, 00:35:57.254 "data_size": 63488 00:35:57.254 } 00:35:57.254 ] 00:35:57.254 } 00:35:57.254 } 00:35:57.254 }' 00:35:57.254 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:57.514 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:35:57.514 pt2 00:35:57.514 pt3 00:35:57.514 pt4' 00:35:57.514 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:57.514 23:17:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:35:57.514 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:57.514 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:57.514 23:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:35:57.514 23:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.514 23:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.514 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.514 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:57.514 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:57.514 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:57.514 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:35:57.514 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.514 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.514 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:57.514 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.514 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:57.514 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:57.514 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:57.514 23:17:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:57.514 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:35:57.514 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.514 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.514 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.514 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:57.514 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:57.514 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:57.514 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:35:57.514 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.514 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.514 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:57.514 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.514 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:57.514 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:57.514 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:35:57.514 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:35:57.514 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:35:57.514 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.773 [2024-12-09 23:17:38.150798] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=57b73536-4f44-482c-a9b7-f3cc15e51c22 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 57b73536-4f44-482c-a9b7-f3cc15e51c22 ']' 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.773 [2024-12-09 23:17:38.194452] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:57.773 [2024-12-09 23:17:38.194613] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:57.773 [2024-12-09 23:17:38.194754] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:57.773 [2024-12-09 23:17:38.194840] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:57.773 [2024-12-09 23:17:38.194862] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:57.773 23:17:38 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:57.773 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:35:57.774 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.774 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.774 [2024-12-09 23:17:38.354248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:35:57.774 [2024-12-09 23:17:38.356616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:35:57.774 [2024-12-09 23:17:38.356808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:35:57.774 [2024-12-09 23:17:38.356870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:35:57.774 [2024-12-09 23:17:38.356932] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:35:57.774 [2024-12-09 23:17:38.356988] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:35:57.774 [2024-12-09 23:17:38.357011] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:35:57.774 [2024-12-09 23:17:38.357033] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:35:57.774 [2024-12-09 23:17:38.357050] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:57.774 [2024-12-09 23:17:38.357069] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:35:57.774 request: 00:35:57.774 { 00:35:57.774 "name": "raid_bdev1", 00:35:57.774 "raid_level": "concat", 00:35:57.774 "base_bdevs": [ 00:35:57.774 "malloc1", 00:35:57.774 "malloc2", 00:35:57.774 "malloc3", 00:35:57.774 "malloc4" 00:35:57.774 ], 00:35:57.774 "strip_size_kb": 64, 00:35:57.774 "superblock": false, 00:35:57.774 "method": "bdev_raid_create", 00:35:57.774 "req_id": 1 00:35:57.774 } 00:35:57.774 Got JSON-RPC error response 00:35:57.774 response: 00:35:57.774 { 00:35:57.774 "code": -17, 00:35:57.774 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:35:57.774 } 00:35:57.774 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:57.774 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:35:57.774 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:57.774 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:57.774 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:57.774 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:57.774 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:35:57.774 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.774 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.774 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.033 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:35:58.033 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:35:58.033 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:35:58.033 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.033 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.033 [2024-12-09 23:17:38.422126] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:58.033 [2024-12-09 23:17:38.422337] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:58.033 [2024-12-09 23:17:38.422370] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:35:58.033 [2024-12-09 23:17:38.422389] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:58.033 [2024-12-09 23:17:38.424906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:58.033 [2024-12-09 23:17:38.424953] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:58.033 [2024-12-09 23:17:38.425043] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:35:58.033 [2024-12-09 23:17:38.425109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:58.033 pt1 00:35:58.033 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.033 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:35:58.033 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:58.033 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:58.033 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:58.033 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:58.033 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:35:58.033 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:58.033 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:58.033 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:58.033 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:58.033 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:58.033 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.033 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.033 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:58.033 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.033 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:58.033 "name": "raid_bdev1", 00:35:58.033 "uuid": "57b73536-4f44-482c-a9b7-f3cc15e51c22", 00:35:58.033 "strip_size_kb": 64, 00:35:58.033 "state": "configuring", 00:35:58.033 "raid_level": "concat", 00:35:58.033 "superblock": true, 00:35:58.033 "num_base_bdevs": 4, 00:35:58.033 "num_base_bdevs_discovered": 1, 00:35:58.033 "num_base_bdevs_operational": 4, 00:35:58.033 "base_bdevs_list": [ 00:35:58.033 { 00:35:58.033 "name": "pt1", 00:35:58.033 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:58.033 "is_configured": true, 00:35:58.033 "data_offset": 2048, 00:35:58.033 "data_size": 63488 00:35:58.033 }, 00:35:58.033 { 00:35:58.033 "name": null, 00:35:58.033 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:58.033 "is_configured": false, 00:35:58.033 "data_offset": 2048, 00:35:58.033 "data_size": 63488 00:35:58.033 }, 00:35:58.033 { 00:35:58.033 "name": null, 00:35:58.033 
"uuid": "00000000-0000-0000-0000-000000000003", 00:35:58.033 "is_configured": false, 00:35:58.033 "data_offset": 2048, 00:35:58.033 "data_size": 63488 00:35:58.033 }, 00:35:58.033 { 00:35:58.033 "name": null, 00:35:58.033 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:58.033 "is_configured": false, 00:35:58.033 "data_offset": 2048, 00:35:58.033 "data_size": 63488 00:35:58.033 } 00:35:58.033 ] 00:35:58.033 }' 00:35:58.033 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:58.033 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.292 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:35:58.292 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:58.292 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.292 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.292 [2024-12-09 23:17:38.877522] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:58.292 [2024-12-09 23:17:38.877765] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:58.292 [2024-12-09 23:17:38.877807] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:35:58.292 [2024-12-09 23:17:38.877826] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:58.292 [2024-12-09 23:17:38.878320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:58.292 [2024-12-09 23:17:38.878373] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:58.292 [2024-12-09 23:17:38.878483] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:35:58.292 [2024-12-09 23:17:38.878517] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:58.292 pt2 00:35:58.292 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.292 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:35:58.292 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.292 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.292 [2024-12-09 23:17:38.889508] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:35:58.292 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.292 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:35:58.292 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:58.292 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:58.292 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:58.292 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:58.292 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:58.292 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:58.292 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:58.292 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:58.292 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:58.292 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:58.292 23:17:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:58.292 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.292 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.550 23:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.550 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:58.550 "name": "raid_bdev1", 00:35:58.550 "uuid": "57b73536-4f44-482c-a9b7-f3cc15e51c22", 00:35:58.550 "strip_size_kb": 64, 00:35:58.550 "state": "configuring", 00:35:58.550 "raid_level": "concat", 00:35:58.550 "superblock": true, 00:35:58.550 "num_base_bdevs": 4, 00:35:58.550 "num_base_bdevs_discovered": 1, 00:35:58.550 "num_base_bdevs_operational": 4, 00:35:58.550 "base_bdevs_list": [ 00:35:58.550 { 00:35:58.550 "name": "pt1", 00:35:58.550 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:58.550 "is_configured": true, 00:35:58.550 "data_offset": 2048, 00:35:58.550 "data_size": 63488 00:35:58.550 }, 00:35:58.550 { 00:35:58.550 "name": null, 00:35:58.550 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:58.550 "is_configured": false, 00:35:58.550 "data_offset": 0, 00:35:58.550 "data_size": 63488 00:35:58.550 }, 00:35:58.550 { 00:35:58.550 "name": null, 00:35:58.550 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:58.550 "is_configured": false, 00:35:58.550 "data_offset": 2048, 00:35:58.550 "data_size": 63488 00:35:58.550 }, 00:35:58.550 { 00:35:58.550 "name": null, 00:35:58.550 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:58.550 "is_configured": false, 00:35:58.550 "data_offset": 2048, 00:35:58.550 "data_size": 63488 00:35:58.550 } 00:35:58.550 ] 00:35:58.550 }' 00:35:58.550 23:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:58.550 23:17:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:35:58.809 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:35:58.809 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:35:58.809 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:58.809 23:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.809 23:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.809 [2024-12-09 23:17:39.292901] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:58.809 [2024-12-09 23:17:39.292975] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:58.809 [2024-12-09 23:17:39.292999] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:35:58.809 [2024-12-09 23:17:39.293012] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:58.809 [2024-12-09 23:17:39.293496] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:58.809 [2024-12-09 23:17:39.293516] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:58.809 [2024-12-09 23:17:39.293621] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:35:58.809 [2024-12-09 23:17:39.293659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:58.809 pt2 00:35:58.809 23:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.809 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:35:58.809 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:35:58.809 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:35:58.809 23:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.809 23:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.810 [2024-12-09 23:17:39.304874] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:35:58.810 [2024-12-09 23:17:39.304936] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:58.810 [2024-12-09 23:17:39.304959] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:35:58.810 [2024-12-09 23:17:39.304970] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:58.810 [2024-12-09 23:17:39.305384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:58.810 [2024-12-09 23:17:39.305425] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:35:58.810 [2024-12-09 23:17:39.305505] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:35:58.810 [2024-12-09 23:17:39.305534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:35:58.810 pt3 00:35:58.810 23:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.810 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:35:58.810 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:35:58.810 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:35:58.810 23:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.810 23:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.810 [2024-12-09 23:17:39.312822] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:35:58.810 [2024-12-09 23:17:39.312875] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:58.810 [2024-12-09 23:17:39.312897] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:35:58.810 [2024-12-09 23:17:39.312908] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:58.810 [2024-12-09 23:17:39.313329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:58.810 [2024-12-09 23:17:39.313347] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:35:58.810 [2024-12-09 23:17:39.313442] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:35:58.810 [2024-12-09 23:17:39.313469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:35:58.810 [2024-12-09 23:17:39.313619] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:35:58.810 [2024-12-09 23:17:39.313638] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:35:58.810 [2024-12-09 23:17:39.313930] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:35:58.810 [2024-12-09 23:17:39.314090] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:35:58.810 [2024-12-09 23:17:39.314106] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:35:58.810 [2024-12-09 23:17:39.314252] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:58.810 pt4 00:35:58.810 23:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.810 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:35:58.810 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:35:58.810 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:35:58.810 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:58.810 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:58.810 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:35:58.810 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:58.810 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:58.810 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:58.810 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:58.810 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:58.810 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:58.810 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:58.810 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:58.810 23:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.810 23:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.810 23:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.810 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:58.810 "name": "raid_bdev1", 00:35:58.810 "uuid": "57b73536-4f44-482c-a9b7-f3cc15e51c22", 00:35:58.810 "strip_size_kb": 64, 00:35:58.810 "state": "online", 00:35:58.810 "raid_level": "concat", 00:35:58.810 
"superblock": true, 00:35:58.810 "num_base_bdevs": 4, 00:35:58.810 "num_base_bdevs_discovered": 4, 00:35:58.810 "num_base_bdevs_operational": 4, 00:35:58.810 "base_bdevs_list": [ 00:35:58.810 { 00:35:58.810 "name": "pt1", 00:35:58.810 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:58.810 "is_configured": true, 00:35:58.810 "data_offset": 2048, 00:35:58.810 "data_size": 63488 00:35:58.810 }, 00:35:58.810 { 00:35:58.810 "name": "pt2", 00:35:58.810 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:58.810 "is_configured": true, 00:35:58.810 "data_offset": 2048, 00:35:58.810 "data_size": 63488 00:35:58.810 }, 00:35:58.810 { 00:35:58.810 "name": "pt3", 00:35:58.810 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:58.810 "is_configured": true, 00:35:58.810 "data_offset": 2048, 00:35:58.810 "data_size": 63488 00:35:58.810 }, 00:35:58.810 { 00:35:58.810 "name": "pt4", 00:35:58.810 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:58.810 "is_configured": true, 00:35:58.810 "data_offset": 2048, 00:35:58.810 "data_size": 63488 00:35:58.810 } 00:35:58.810 ] 00:35:58.810 }' 00:35:58.810 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:58.810 23:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:59.379 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:35:59.379 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:35:59.379 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:35:59.379 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:35:59.379 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:35:59.379 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:35:59.379 23:17:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:35:59.379 23:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.379 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:35:59.379 23:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:59.379 [2024-12-09 23:17:39.760624] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:59.379 23:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.379 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:59.379 "name": "raid_bdev1", 00:35:59.379 "aliases": [ 00:35:59.379 "57b73536-4f44-482c-a9b7-f3cc15e51c22" 00:35:59.379 ], 00:35:59.379 "product_name": "Raid Volume", 00:35:59.379 "block_size": 512, 00:35:59.379 "num_blocks": 253952, 00:35:59.379 "uuid": "57b73536-4f44-482c-a9b7-f3cc15e51c22", 00:35:59.379 "assigned_rate_limits": { 00:35:59.379 "rw_ios_per_sec": 0, 00:35:59.379 "rw_mbytes_per_sec": 0, 00:35:59.379 "r_mbytes_per_sec": 0, 00:35:59.379 "w_mbytes_per_sec": 0 00:35:59.379 }, 00:35:59.379 "claimed": false, 00:35:59.379 "zoned": false, 00:35:59.379 "supported_io_types": { 00:35:59.379 "read": true, 00:35:59.379 "write": true, 00:35:59.379 "unmap": true, 00:35:59.379 "flush": true, 00:35:59.379 "reset": true, 00:35:59.379 "nvme_admin": false, 00:35:59.379 "nvme_io": false, 00:35:59.379 "nvme_io_md": false, 00:35:59.379 "write_zeroes": true, 00:35:59.379 "zcopy": false, 00:35:59.379 "get_zone_info": false, 00:35:59.379 "zone_management": false, 00:35:59.379 "zone_append": false, 00:35:59.379 "compare": false, 00:35:59.379 "compare_and_write": false, 00:35:59.379 "abort": false, 00:35:59.379 "seek_hole": false, 00:35:59.379 "seek_data": false, 00:35:59.379 "copy": false, 00:35:59.379 "nvme_iov_md": false 00:35:59.379 }, 00:35:59.379 
"memory_domains": [ 00:35:59.379 { 00:35:59.379 "dma_device_id": "system", 00:35:59.379 "dma_device_type": 1 00:35:59.379 }, 00:35:59.379 { 00:35:59.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:59.379 "dma_device_type": 2 00:35:59.379 }, 00:35:59.379 { 00:35:59.379 "dma_device_id": "system", 00:35:59.379 "dma_device_type": 1 00:35:59.379 }, 00:35:59.379 { 00:35:59.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:59.379 "dma_device_type": 2 00:35:59.379 }, 00:35:59.379 { 00:35:59.379 "dma_device_id": "system", 00:35:59.379 "dma_device_type": 1 00:35:59.379 }, 00:35:59.379 { 00:35:59.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:59.379 "dma_device_type": 2 00:35:59.379 }, 00:35:59.379 { 00:35:59.379 "dma_device_id": "system", 00:35:59.379 "dma_device_type": 1 00:35:59.379 }, 00:35:59.379 { 00:35:59.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:59.379 "dma_device_type": 2 00:35:59.379 } 00:35:59.379 ], 00:35:59.379 "driver_specific": { 00:35:59.379 "raid": { 00:35:59.379 "uuid": "57b73536-4f44-482c-a9b7-f3cc15e51c22", 00:35:59.379 "strip_size_kb": 64, 00:35:59.379 "state": "online", 00:35:59.379 "raid_level": "concat", 00:35:59.379 "superblock": true, 00:35:59.379 "num_base_bdevs": 4, 00:35:59.379 "num_base_bdevs_discovered": 4, 00:35:59.379 "num_base_bdevs_operational": 4, 00:35:59.379 "base_bdevs_list": [ 00:35:59.379 { 00:35:59.379 "name": "pt1", 00:35:59.379 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:59.379 "is_configured": true, 00:35:59.379 "data_offset": 2048, 00:35:59.379 "data_size": 63488 00:35:59.379 }, 00:35:59.379 { 00:35:59.379 "name": "pt2", 00:35:59.379 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:59.379 "is_configured": true, 00:35:59.379 "data_offset": 2048, 00:35:59.379 "data_size": 63488 00:35:59.379 }, 00:35:59.379 { 00:35:59.379 "name": "pt3", 00:35:59.379 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:59.379 "is_configured": true, 00:35:59.379 "data_offset": 2048, 00:35:59.379 "data_size": 63488 
00:35:59.379 }, 00:35:59.379 { 00:35:59.379 "name": "pt4", 00:35:59.379 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:59.379 "is_configured": true, 00:35:59.379 "data_offset": 2048, 00:35:59.379 "data_size": 63488 00:35:59.379 } 00:35:59.379 ] 00:35:59.379 } 00:35:59.379 } 00:35:59.379 }' 00:35:59.379 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:59.379 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:35:59.379 pt2 00:35:59.379 pt3 00:35:59.379 pt4' 00:35:59.379 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:59.379 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:35:59.379 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:59.379 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:35:59.379 23:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.379 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:59.379 23:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:59.379 23:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.379 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:59.379 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:59.379 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:59.379 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:35:59.379 23:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.379 23:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:59.379 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:59.379 23:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.379 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:59.379 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:59.379 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:59.379 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:59.379 23:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:35:59.379 23:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.379 23:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:59.641 23:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.641 23:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:59.641 23:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:59.641 23:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:59.641 23:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:35:59.641 23:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.641 23:17:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:59.641 23:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:59.641 23:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.641 23:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:59.641 23:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:59.641 23:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:35:59.641 23:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:35:59.641 23:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.641 23:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:59.641 [2024-12-09 23:17:40.104085] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:59.641 23:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.641 23:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 57b73536-4f44-482c-a9b7-f3cc15e51c22 '!=' 57b73536-4f44-482c-a9b7-f3cc15e51c22 ']' 00:35:59.641 23:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:35:59.641 23:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:35:59.641 23:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:35:59.641 23:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72489 00:35:59.641 23:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72489 ']' 00:35:59.641 23:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72489 00:35:59.641 23:17:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:35:59.641 23:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:59.641 23:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72489 00:35:59.641 killing process with pid 72489 00:35:59.641 23:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:59.641 23:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:59.641 23:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72489' 00:35:59.641 23:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72489 00:35:59.641 [2024-12-09 23:17:40.192730] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:59.641 [2024-12-09 23:17:40.192823] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:59.641 23:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72489 00:35:59.641 [2024-12-09 23:17:40.192901] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:59.641 [2024-12-09 23:17:40.192912] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:36:00.208 [2024-12-09 23:17:40.609047] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:01.153 23:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:36:01.153 00:36:01.153 real 0m5.636s 00:36:01.153 user 0m8.009s 00:36:01.153 sys 0m1.156s 00:36:01.153 23:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:01.153 ************************************ 00:36:01.153 END TEST raid_superblock_test 00:36:01.153 ************************************ 00:36:01.153 23:17:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:01.413 23:17:41 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:36:01.413 23:17:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:36:01.413 23:17:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:01.413 23:17:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:01.413 ************************************ 00:36:01.413 START TEST raid_read_error_test 00:36:01.413 ************************************ 00:36:01.413 23:17:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:36:01.413 23:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:36:01.413 23:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:36:01.413 23:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:36:01.413 23:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:36:01.413 23:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:01.413 23:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:36:01.413 23:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:36:01.413 23:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:01.413 23:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:36:01.413 23:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:36:01.413 23:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:01.413 23:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:36:01.413 23:17:41 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:36:01.413 23:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:01.413 23:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:36:01.413 23:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:36:01.413 23:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:01.413 23:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:36:01.413 23:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:36:01.413 23:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:36:01.413 23:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:36:01.413 23:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:36:01.413 23:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:36:01.413 23:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:36:01.414 23:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:36:01.414 23:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:36:01.414 23:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:36:01.414 23:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:36:01.414 23:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.zlEBqv73We 00:36:01.414 23:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72748 00:36:01.414 23:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72748 00:36:01.414 23:17:41 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:36:01.414 23:17:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 72748 ']' 00:36:01.414 23:17:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:01.414 23:17:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:01.414 23:17:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:01.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:01.414 23:17:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:01.414 23:17:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:01.414 [2024-12-09 23:17:41.968873] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:36:01.414 [2024-12-09 23:17:41.969006] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72748 ] 00:36:01.672 [2024-12-09 23:17:42.152624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:01.672 [2024-12-09 23:17:42.275822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:01.933 [2024-12-09 23:17:42.480437] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:01.933 [2024-12-09 23:17:42.480507] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:02.504 23:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:02.504 23:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:36:02.504 23:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:36:02.504 23:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:36:02.504 23:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.504 23:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.504 BaseBdev1_malloc 00:36:02.504 23:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.504 23:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:36:02.504 23:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.504 23:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.504 true 00:36:02.504 23:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:36:02.504 23:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:36:02.504 23:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.504 23:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.504 [2024-12-09 23:17:42.888068] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:36:02.504 [2024-12-09 23:17:42.888131] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:02.504 [2024-12-09 23:17:42.888157] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:36:02.504 [2024-12-09 23:17:42.888172] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:02.504 [2024-12-09 23:17:42.890707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:02.504 [2024-12-09 23:17:42.890752] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:36:02.504 BaseBdev1 00:36:02.504 23:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.504 23:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:36:02.504 23:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:36:02.504 23:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.504 23:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.504 BaseBdev2_malloc 00:36:02.504 23:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.504 23:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:36:02.504 23:17:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.504 23:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.504 true 00:36:02.504 23:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.504 23:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:36:02.504 23:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.504 23:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.504 [2024-12-09 23:17:42.959862] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:36:02.505 [2024-12-09 23:17:42.960079] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:02.505 [2024-12-09 23:17:42.960195] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:36:02.505 [2024-12-09 23:17:42.960288] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:02.505 [2024-12-09 23:17:42.963223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:02.505 [2024-12-09 23:17:42.963404] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:36:02.505 BaseBdev2 00:36:02.505 23:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.505 23:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:36:02.505 23:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:36:02.505 23:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.505 23:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.505 BaseBdev3_malloc 00:36:02.505 23:17:43 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.505 23:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:36:02.505 23:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.505 23:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.505 true 00:36:02.505 23:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.505 23:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:36:02.505 23:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.505 23:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.505 [2024-12-09 23:17:43.041386] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:36:02.505 [2024-12-09 23:17:43.041459] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:02.505 [2024-12-09 23:17:43.041481] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:36:02.505 [2024-12-09 23:17:43.041495] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:02.505 [2024-12-09 23:17:43.043992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:02.505 [2024-12-09 23:17:43.044036] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:36:02.505 BaseBdev3 00:36:02.505 23:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.505 23:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:36:02.505 23:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:36:02.505 23:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.505 23:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.505 BaseBdev4_malloc 00:36:02.505 23:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.505 23:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:36:02.505 23:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.505 23:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.505 true 00:36:02.505 23:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.505 23:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:36:02.505 23:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.505 23:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.505 [2024-12-09 23:17:43.099735] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:36:02.505 [2024-12-09 23:17:43.099799] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:02.505 [2024-12-09 23:17:43.099821] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:36:02.505 [2024-12-09 23:17:43.099835] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:02.505 [2024-12-09 23:17:43.102273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:02.505 [2024-12-09 23:17:43.102323] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:36:02.505 BaseBdev4 00:36:02.505 23:17:43 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.505 23:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:36:02.505 23:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.505 23:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.505 [2024-12-09 23:17:43.107802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:02.505 [2024-12-09 23:17:43.109996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:02.505 [2024-12-09 23:17:43.110076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:02.505 [2024-12-09 23:17:43.110144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:36:02.505 [2024-12-09 23:17:43.110387] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:36:02.505 [2024-12-09 23:17:43.110418] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:36:02.505 [2024-12-09 23:17:43.110690] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:36:02.505 [2024-12-09 23:17:43.110845] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:36:02.505 [2024-12-09 23:17:43.110859] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:36:02.505 [2024-12-09 23:17:43.111040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:02.505 23:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.505 23:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:36:02.505 23:17:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:02.505 23:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:02.505 23:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:02.505 23:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:02.505 23:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:02.505 23:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:02.505 23:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:02.505 23:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:02.505 23:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:02.505 23:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:02.505 23:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:02.505 23:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.505 23:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.769 23:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.769 23:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:02.769 "name": "raid_bdev1", 00:36:02.769 "uuid": "1e36f321-7dc1-4e1e-ab54-ea1879aa63d6", 00:36:02.769 "strip_size_kb": 64, 00:36:02.769 "state": "online", 00:36:02.769 "raid_level": "concat", 00:36:02.769 "superblock": true, 00:36:02.769 "num_base_bdevs": 4, 00:36:02.769 "num_base_bdevs_discovered": 4, 00:36:02.769 "num_base_bdevs_operational": 4, 00:36:02.769 "base_bdevs_list": [ 
00:36:02.769 { 00:36:02.769 "name": "BaseBdev1", 00:36:02.769 "uuid": "1caea6a5-cc7b-5c21-9a24-b2b867eac0e8", 00:36:02.769 "is_configured": true, 00:36:02.769 "data_offset": 2048, 00:36:02.769 "data_size": 63488 00:36:02.769 }, 00:36:02.769 { 00:36:02.769 "name": "BaseBdev2", 00:36:02.769 "uuid": "7e0b1f36-d6c9-5ef7-9df1-f28c90b0d5fa", 00:36:02.769 "is_configured": true, 00:36:02.769 "data_offset": 2048, 00:36:02.770 "data_size": 63488 00:36:02.770 }, 00:36:02.770 { 00:36:02.770 "name": "BaseBdev3", 00:36:02.770 "uuid": "a9061a32-c2e4-5fa0-af29-308ed4a73a1b", 00:36:02.770 "is_configured": true, 00:36:02.770 "data_offset": 2048, 00:36:02.770 "data_size": 63488 00:36:02.770 }, 00:36:02.770 { 00:36:02.770 "name": "BaseBdev4", 00:36:02.770 "uuid": "174c1d14-9ad5-5c59-9ab1-365f97d06d6c", 00:36:02.770 "is_configured": true, 00:36:02.770 "data_offset": 2048, 00:36:02.770 "data_size": 63488 00:36:02.770 } 00:36:02.770 ] 00:36:02.770 }' 00:36:02.770 23:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:02.770 23:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:03.029 23:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:36:03.029 23:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:36:03.029 [2024-12-09 23:17:43.660388] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:36:03.965 23:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:36:03.965 23:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.965 23:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:03.965 23:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.965 23:17:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:36:03.965 23:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:36:03.965 23:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:36:03.965 23:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:36:03.965 23:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:03.965 23:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:03.965 23:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:03.965 23:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:03.965 23:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:03.965 23:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:03.965 23:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:03.965 23:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:03.965 23:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:03.965 23:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:03.965 23:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.965 23:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:03.965 23:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:03.965 23:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.224 23:17:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:04.224 "name": "raid_bdev1", 00:36:04.224 "uuid": "1e36f321-7dc1-4e1e-ab54-ea1879aa63d6", 00:36:04.224 "strip_size_kb": 64, 00:36:04.224 "state": "online", 00:36:04.224 "raid_level": "concat", 00:36:04.224 "superblock": true, 00:36:04.224 "num_base_bdevs": 4, 00:36:04.224 "num_base_bdevs_discovered": 4, 00:36:04.224 "num_base_bdevs_operational": 4, 00:36:04.224 "base_bdevs_list": [ 00:36:04.224 { 00:36:04.224 "name": "BaseBdev1", 00:36:04.224 "uuid": "1caea6a5-cc7b-5c21-9a24-b2b867eac0e8", 00:36:04.224 "is_configured": true, 00:36:04.224 "data_offset": 2048, 00:36:04.224 "data_size": 63488 00:36:04.224 }, 00:36:04.224 { 00:36:04.224 "name": "BaseBdev2", 00:36:04.224 "uuid": "7e0b1f36-d6c9-5ef7-9df1-f28c90b0d5fa", 00:36:04.224 "is_configured": true, 00:36:04.224 "data_offset": 2048, 00:36:04.224 "data_size": 63488 00:36:04.224 }, 00:36:04.224 { 00:36:04.224 "name": "BaseBdev3", 00:36:04.224 "uuid": "a9061a32-c2e4-5fa0-af29-308ed4a73a1b", 00:36:04.224 "is_configured": true, 00:36:04.224 "data_offset": 2048, 00:36:04.224 "data_size": 63488 00:36:04.224 }, 00:36:04.224 { 00:36:04.224 "name": "BaseBdev4", 00:36:04.224 "uuid": "174c1d14-9ad5-5c59-9ab1-365f97d06d6c", 00:36:04.224 "is_configured": true, 00:36:04.224 "data_offset": 2048, 00:36:04.224 "data_size": 63488 00:36:04.224 } 00:36:04.224 ] 00:36:04.224 }' 00:36:04.224 23:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:04.224 23:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:04.483 23:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:36:04.483 23:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.483 23:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:04.483 [2024-12-09 23:17:44.997226] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:04.483 [2024-12-09 23:17:44.997267] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:04.483 [2024-12-09 23:17:45.000200] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:04.483 [2024-12-09 23:17:45.000270] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:04.483 [2024-12-09 23:17:45.000317] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:04.483 [2024-12-09 23:17:45.000332] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:36:04.483 { 00:36:04.483 "results": [ 00:36:04.483 { 00:36:04.483 "job": "raid_bdev1", 00:36:04.483 "core_mask": "0x1", 00:36:04.483 "workload": "randrw", 00:36:04.483 "percentage": 50, 00:36:04.483 "status": "finished", 00:36:04.483 "queue_depth": 1, 00:36:04.483 "io_size": 131072, 00:36:04.483 "runtime": 1.336913, 00:36:04.483 "iops": 15134.118674887595, 00:36:04.483 "mibps": 1891.7648343609494, 00:36:04.483 "io_failed": 1, 00:36:04.483 "io_timeout": 0, 00:36:04.483 "avg_latency_us": 91.1131351937353, 00:36:04.483 "min_latency_us": 27.142168674698794, 00:36:04.483 "max_latency_us": 1441.0024096385541 00:36:04.483 } 00:36:04.483 ], 00:36:04.483 "core_count": 1 00:36:04.483 } 00:36:04.483 23:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.484 23:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72748 00:36:04.484 23:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72748 ']' 00:36:04.484 23:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72748 00:36:04.484 23:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:36:04.484 23:17:45 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:04.484 23:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72748 00:36:04.484 killing process with pid 72748 00:36:04.484 23:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:04.484 23:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:04.484 23:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72748' 00:36:04.484 23:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72748 00:36:04.484 [2024-12-09 23:17:45.040955] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:04.484 23:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72748 00:36:04.742 [2024-12-09 23:17:45.377001] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:06.116 23:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:36:06.116 23:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:36:06.116 23:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.zlEBqv73We 00:36:06.116 23:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:36:06.116 ************************************ 00:36:06.116 END TEST raid_read_error_test 00:36:06.116 ************************************ 00:36:06.116 23:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:36:06.116 23:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:36:06.116 23:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:36:06.116 23:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:36:06.116 00:36:06.116 real 0m4.768s 
00:36:06.116 user 0m5.563s 00:36:06.116 sys 0m0.616s 00:36:06.116 23:17:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:06.116 23:17:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:06.116 23:17:46 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:36:06.116 23:17:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:36:06.116 23:17:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:06.116 23:17:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:06.116 ************************************ 00:36:06.117 START TEST raid_write_error_test 00:36:06.117 ************************************ 00:36:06.117 23:17:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:36:06.117 23:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:36:06.117 23:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:36:06.117 23:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:36:06.117 23:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:36:06.117 23:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:06.117 23:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:36:06.117 23:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:36:06.117 23:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:06.117 23:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:36:06.117 23:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:36:06.117 23:17:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:06.117 23:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:36:06.117 23:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:36:06.117 23:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:06.117 23:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:36:06.117 23:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:36:06.117 23:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:06.117 23:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:36:06.117 23:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:36:06.117 23:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:36:06.117 23:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:36:06.117 23:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:36:06.117 23:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:36:06.117 23:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:36:06.117 23:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:36:06.117 23:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:36:06.117 23:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:36:06.117 23:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:36:06.117 23:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ddwN4hTGjt 00:36:06.117 23:17:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72898 00:36:06.117 23:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72898 00:36:06.117 23:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:36:06.117 23:17:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 72898 ']' 00:36:06.117 23:17:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:06.117 23:17:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:06.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:06.117 23:17:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:06.117 23:17:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:06.117 23:17:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:06.375 [2024-12-09 23:17:46.819854] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:36:06.376 [2024-12-09 23:17:46.819985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72898 ] 00:36:06.376 [2024-12-09 23:17:47.000068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:06.633 [2024-12-09 23:17:47.125904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:06.891 [2024-12-09 23:17:47.347910] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:06.891 [2024-12-09 23:17:47.347981] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:07.150 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:07.150 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:36:07.150 23:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:36:07.150 23:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:36:07.150 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.150 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:07.150 BaseBdev1_malloc 00:36:07.150 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.150 23:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:36:07.150 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.150 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:07.150 true 00:36:07.150 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:36:07.150 23:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:36:07.150 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.150 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:07.150 [2024-12-09 23:17:47.745003] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:36:07.150 [2024-12-09 23:17:47.745064] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:07.150 [2024-12-09 23:17:47.745090] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:36:07.150 [2024-12-09 23:17:47.745104] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:07.150 [2024-12-09 23:17:47.747828] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:07.150 [2024-12-09 23:17:47.747876] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:36:07.150 BaseBdev1 00:36:07.150 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.150 23:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:36:07.150 23:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:36:07.150 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.150 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:07.409 BaseBdev2_malloc 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:36:07.409 23:17:47 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:07.409 true 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:07.409 [2024-12-09 23:17:47.815681] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:36:07.409 [2024-12-09 23:17:47.815746] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:07.409 [2024-12-09 23:17:47.815767] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:36:07.409 [2024-12-09 23:17:47.815782] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:07.409 [2024-12-09 23:17:47.818366] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:07.409 [2024-12-09 23:17:47.818429] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:36:07.409 BaseBdev2 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:36:07.409 BaseBdev3_malloc 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:07.409 true 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:07.409 [2024-12-09 23:17:47.891454] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:36:07.409 [2024-12-09 23:17:47.891515] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:07.409 [2024-12-09 23:17:47.891537] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:36:07.409 [2024-12-09 23:17:47.891553] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:07.409 [2024-12-09 23:17:47.894122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:07.409 [2024-12-09 23:17:47.894171] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:36:07.409 BaseBdev3 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:07.409 BaseBdev4_malloc 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:07.409 true 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:07.409 [2024-12-09 23:17:47.961838] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:36:07.409 [2024-12-09 23:17:47.961899] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:07.409 [2024-12-09 23:17:47.961921] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:36:07.409 [2024-12-09 23:17:47.961935] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:07.409 [2024-12-09 23:17:47.964457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:07.409 [2024-12-09 23:17:47.964638] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:36:07.409 BaseBdev4 
00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:07.409 [2024-12-09 23:17:47.973901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:07.409 [2024-12-09 23:17:47.976169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:07.409 [2024-12-09 23:17:47.976430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:07.409 [2024-12-09 23:17:47.976516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:36:07.409 [2024-12-09 23:17:47.976764] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:36:07.409 [2024-12-09 23:17:47.976783] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:36:07.409 [2024-12-09 23:17:47.977055] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:36:07.409 [2024-12-09 23:17:47.977223] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:36:07.409 [2024-12-09 23:17:47.977237] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:36:07.409 [2024-12-09 23:17:47.977428] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.409 23:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:07.409 23:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.409 23:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:07.409 "name": "raid_bdev1", 00:36:07.409 "uuid": "e7b10ff1-e21f-43fa-ba8f-46268a54e6e8", 00:36:07.409 "strip_size_kb": 64, 00:36:07.410 "state": "online", 00:36:07.410 "raid_level": "concat", 00:36:07.410 "superblock": true, 00:36:07.410 "num_base_bdevs": 4, 00:36:07.410 "num_base_bdevs_discovered": 4, 00:36:07.410 
"num_base_bdevs_operational": 4, 00:36:07.410 "base_bdevs_list": [ 00:36:07.410 { 00:36:07.410 "name": "BaseBdev1", 00:36:07.410 "uuid": "1b91ea0f-76d5-57dc-89ae-a2d1085d451c", 00:36:07.410 "is_configured": true, 00:36:07.410 "data_offset": 2048, 00:36:07.410 "data_size": 63488 00:36:07.410 }, 00:36:07.410 { 00:36:07.410 "name": "BaseBdev2", 00:36:07.410 "uuid": "b9adbfce-dc9a-5161-9e05-89cc296a0529", 00:36:07.410 "is_configured": true, 00:36:07.410 "data_offset": 2048, 00:36:07.410 "data_size": 63488 00:36:07.410 }, 00:36:07.410 { 00:36:07.410 "name": "BaseBdev3", 00:36:07.410 "uuid": "f05f7e53-e59a-5067-9fa8-dd6f92108d8a", 00:36:07.410 "is_configured": true, 00:36:07.410 "data_offset": 2048, 00:36:07.410 "data_size": 63488 00:36:07.410 }, 00:36:07.410 { 00:36:07.410 "name": "BaseBdev4", 00:36:07.410 "uuid": "a7e5ab1c-8577-514a-8d01-40430b499b18", 00:36:07.410 "is_configured": true, 00:36:07.410 "data_offset": 2048, 00:36:07.410 "data_size": 63488 00:36:07.410 } 00:36:07.410 ] 00:36:07.410 }' 00:36:07.410 23:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:07.410 23:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:07.979 23:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:36:07.979 23:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:36:07.979 [2024-12-09 23:17:48.522602] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:36:08.915 23:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:36:08.915 23:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.915 23:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:08.915 23:17:49 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.915 23:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:36:08.915 23:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:36:08.915 23:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:36:08.915 23:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:36:08.915 23:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:08.915 23:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:08.915 23:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:08.915 23:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:08.915 23:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:08.915 23:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:08.915 23:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:08.915 23:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:08.915 23:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:08.915 23:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:08.915 23:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.915 23:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:08.915 23:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:08.915 23:17:49 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.915 23:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:08.915 "name": "raid_bdev1", 00:36:08.915 "uuid": "e7b10ff1-e21f-43fa-ba8f-46268a54e6e8", 00:36:08.915 "strip_size_kb": 64, 00:36:08.915 "state": "online", 00:36:08.915 "raid_level": "concat", 00:36:08.915 "superblock": true, 00:36:08.915 "num_base_bdevs": 4, 00:36:08.915 "num_base_bdevs_discovered": 4, 00:36:08.915 "num_base_bdevs_operational": 4, 00:36:08.915 "base_bdevs_list": [ 00:36:08.915 { 00:36:08.915 "name": "BaseBdev1", 00:36:08.915 "uuid": "1b91ea0f-76d5-57dc-89ae-a2d1085d451c", 00:36:08.915 "is_configured": true, 00:36:08.915 "data_offset": 2048, 00:36:08.915 "data_size": 63488 00:36:08.915 }, 00:36:08.915 { 00:36:08.915 "name": "BaseBdev2", 00:36:08.915 "uuid": "b9adbfce-dc9a-5161-9e05-89cc296a0529", 00:36:08.915 "is_configured": true, 00:36:08.915 "data_offset": 2048, 00:36:08.915 "data_size": 63488 00:36:08.915 }, 00:36:08.915 { 00:36:08.915 "name": "BaseBdev3", 00:36:08.915 "uuid": "f05f7e53-e59a-5067-9fa8-dd6f92108d8a", 00:36:08.915 "is_configured": true, 00:36:08.915 "data_offset": 2048, 00:36:08.915 "data_size": 63488 00:36:08.915 }, 00:36:08.915 { 00:36:08.915 "name": "BaseBdev4", 00:36:08.915 "uuid": "a7e5ab1c-8577-514a-8d01-40430b499b18", 00:36:08.915 "is_configured": true, 00:36:08.915 "data_offset": 2048, 00:36:08.915 "data_size": 63488 00:36:08.915 } 00:36:08.915 ] 00:36:08.915 }' 00:36:08.915 23:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:08.915 23:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:09.484 23:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:36:09.484 23:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.484 23:17:49 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:36:09.484 [2024-12-09 23:17:49.863359] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:09.484 [2024-12-09 23:17:49.863398] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:09.484 [2024-12-09 23:17:49.866232] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:09.484 [2024-12-09 23:17:49.866311] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:09.484 [2024-12-09 23:17:49.866361] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:09.484 [2024-12-09 23:17:49.866380] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:36:09.484 { 00:36:09.484 "results": [ 00:36:09.484 { 00:36:09.484 "job": "raid_bdev1", 00:36:09.484 "core_mask": "0x1", 00:36:09.484 "workload": "randrw", 00:36:09.484 "percentage": 50, 00:36:09.484 "status": "finished", 00:36:09.484 "queue_depth": 1, 00:36:09.484 "io_size": 131072, 00:36:09.484 "runtime": 1.340638, 00:36:09.484 "iops": 15109.224115682235, 00:36:09.484 "mibps": 1888.6530144602793, 00:36:09.484 "io_failed": 1, 00:36:09.484 "io_timeout": 0, 00:36:09.484 "avg_latency_us": 91.35988317192351, 00:36:09.484 "min_latency_us": 27.347791164658634, 00:36:09.484 "max_latency_us": 1480.4819277108434 00:36:09.484 } 00:36:09.484 ], 00:36:09.484 "core_count": 1 00:36:09.484 } 00:36:09.484 23:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.484 23:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72898 00:36:09.484 23:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 72898 ']' 00:36:09.484 23:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 72898 00:36:09.484 23:17:49 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:36:09.484 23:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:09.484 23:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72898 00:36:09.484 killing process with pid 72898 00:36:09.484 23:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:09.484 23:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:09.484 23:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72898' 00:36:09.484 23:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 72898 00:36:09.484 [2024-12-09 23:17:49.916386] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:09.484 23:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 72898 00:36:09.743 [2024-12-09 23:17:50.266233] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:11.117 23:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ddwN4hTGjt 00:36:11.117 23:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:36:11.117 23:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:36:11.117 23:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:36:11.117 23:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:36:11.117 23:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:36:11.117 23:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:36:11.117 23:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:36:11.117 00:36:11.117 real 0m4.810s 00:36:11.117 user 0m5.655s 
00:36:11.117 sys 0m0.646s 00:36:11.117 23:17:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:11.117 ************************************ 00:36:11.117 END TEST raid_write_error_test 00:36:11.117 ************************************ 00:36:11.117 23:17:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:11.117 23:17:51 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:36:11.117 23:17:51 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:36:11.118 23:17:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:36:11.118 23:17:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:11.118 23:17:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:11.118 ************************************ 00:36:11.118 START TEST raid_state_function_test 00:36:11.118 ************************************ 00:36:11.118 23:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:36:11.118 23:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:36:11.118 23:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:36:11.118 23:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:36:11.118 23:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:36:11.118 23:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:36:11.118 23:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:11.118 23:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:36:11.118 23:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:11.118 
23:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:11.118 23:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:36:11.118 23:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:11.118 23:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:11.118 23:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:36:11.118 23:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:11.118 23:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:11.118 23:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:36:11.118 23:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:11.118 23:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:11.118 23:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:36:11.118 23:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:36:11.118 23:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:36:11.118 23:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:36:11.118 23:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:36:11.118 23:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:36:11.118 23:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:36:11.118 23:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:36:11.118 23:17:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:36:11.118 23:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:36:11.118 Process raid pid: 73044 00:36:11.118 23:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73044 00:36:11.118 23:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:36:11.118 23:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73044' 00:36:11.118 23:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73044 00:36:11.118 23:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73044 ']' 00:36:11.118 23:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:11.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:11.118 23:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:11.118 23:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:11.118 23:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:11.118 23:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:11.118 [2024-12-09 23:17:51.687323] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:36:11.118 [2024-12-09 23:17:51.687466] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:11.377 [2024-12-09 23:17:51.868550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:11.377 [2024-12-09 23:17:51.993308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:11.634 [2024-12-09 23:17:52.214612] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:11.634 [2024-12-09 23:17:52.214647] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:12.203 23:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:12.203 23:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:36:12.203 23:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:36:12.203 23:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.203 23:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:12.203 [2024-12-09 23:17:52.559595] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:12.203 [2024-12-09 23:17:52.559660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:12.203 [2024-12-09 23:17:52.559673] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:12.203 [2024-12-09 23:17:52.559685] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:12.203 [2024-12-09 23:17:52.559693] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:36:12.203 [2024-12-09 23:17:52.559722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:12.203 [2024-12-09 23:17:52.559737] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:36:12.203 [2024-12-09 23:17:52.559750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:36:12.203 23:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.203 23:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:36:12.203 23:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:12.203 23:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:12.203 23:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:12.203 23:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:12.203 23:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:12.203 23:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:12.203 23:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:12.203 23:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:12.203 23:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:12.203 23:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:12.203 23:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.203 23:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:36:12.203 23:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:12.203 23:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.203 23:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:12.203 "name": "Existed_Raid", 00:36:12.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:12.203 "strip_size_kb": 0, 00:36:12.203 "state": "configuring", 00:36:12.203 "raid_level": "raid1", 00:36:12.203 "superblock": false, 00:36:12.203 "num_base_bdevs": 4, 00:36:12.203 "num_base_bdevs_discovered": 0, 00:36:12.203 "num_base_bdevs_operational": 4, 00:36:12.203 "base_bdevs_list": [ 00:36:12.203 { 00:36:12.203 "name": "BaseBdev1", 00:36:12.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:12.203 "is_configured": false, 00:36:12.203 "data_offset": 0, 00:36:12.203 "data_size": 0 00:36:12.203 }, 00:36:12.203 { 00:36:12.203 "name": "BaseBdev2", 00:36:12.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:12.203 "is_configured": false, 00:36:12.203 "data_offset": 0, 00:36:12.203 "data_size": 0 00:36:12.203 }, 00:36:12.203 { 00:36:12.203 "name": "BaseBdev3", 00:36:12.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:12.203 "is_configured": false, 00:36:12.203 "data_offset": 0, 00:36:12.203 "data_size": 0 00:36:12.203 }, 00:36:12.203 { 00:36:12.203 "name": "BaseBdev4", 00:36:12.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:12.203 "is_configured": false, 00:36:12.203 "data_offset": 0, 00:36:12.203 "data_size": 0 00:36:12.203 } 00:36:12.203 ] 00:36:12.203 }' 00:36:12.203 23:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:12.203 23:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:12.462 23:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:36:12.462 23:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.462 23:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:12.462 [2024-12-09 23:17:52.990998] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:12.462 [2024-12-09 23:17:52.991046] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:36:12.462 23:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.462 23:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:36:12.462 23:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.462 23:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:12.462 [2024-12-09 23:17:53.002979] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:12.462 [2024-12-09 23:17:53.003034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:12.462 [2024-12-09 23:17:53.003046] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:12.463 [2024-12-09 23:17:53.003060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:12.463 [2024-12-09 23:17:53.003069] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:12.463 [2024-12-09 23:17:53.003081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:12.463 [2024-12-09 23:17:53.003089] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:36:12.463 [2024-12-09 23:17:53.003102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 00:36:12.463 23:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.463 23:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:36:12.463 23:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.463 23:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:12.463 [2024-12-09 23:17:53.053983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:12.463 BaseBdev1 00:36:12.463 23:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.463 23:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:36:12.463 23:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:36:12.463 23:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:12.463 23:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:12.463 23:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:12.463 23:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:12.463 23:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:12.463 23:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.463 23:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:12.463 23:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.463 23:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
-t 2000 00:36:12.463 23:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.463 23:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:12.463 [ 00:36:12.463 { 00:36:12.463 "name": "BaseBdev1", 00:36:12.463 "aliases": [ 00:36:12.463 "ec0a9307-639c-48e1-a8eb-2b177207916a" 00:36:12.463 ], 00:36:12.463 "product_name": "Malloc disk", 00:36:12.463 "block_size": 512, 00:36:12.463 "num_blocks": 65536, 00:36:12.463 "uuid": "ec0a9307-639c-48e1-a8eb-2b177207916a", 00:36:12.463 "assigned_rate_limits": { 00:36:12.463 "rw_ios_per_sec": 0, 00:36:12.463 "rw_mbytes_per_sec": 0, 00:36:12.463 "r_mbytes_per_sec": 0, 00:36:12.463 "w_mbytes_per_sec": 0 00:36:12.463 }, 00:36:12.463 "claimed": true, 00:36:12.463 "claim_type": "exclusive_write", 00:36:12.463 "zoned": false, 00:36:12.463 "supported_io_types": { 00:36:12.463 "read": true, 00:36:12.463 "write": true, 00:36:12.463 "unmap": true, 00:36:12.463 "flush": true, 00:36:12.463 "reset": true, 00:36:12.463 "nvme_admin": false, 00:36:12.463 "nvme_io": false, 00:36:12.463 "nvme_io_md": false, 00:36:12.463 "write_zeroes": true, 00:36:12.463 "zcopy": true, 00:36:12.463 "get_zone_info": false, 00:36:12.463 "zone_management": false, 00:36:12.463 "zone_append": false, 00:36:12.463 "compare": false, 00:36:12.463 "compare_and_write": false, 00:36:12.463 "abort": true, 00:36:12.463 "seek_hole": false, 00:36:12.463 "seek_data": false, 00:36:12.463 "copy": true, 00:36:12.722 "nvme_iov_md": false 00:36:12.722 }, 00:36:12.722 "memory_domains": [ 00:36:12.722 { 00:36:12.722 "dma_device_id": "system", 00:36:12.722 "dma_device_type": 1 00:36:12.722 }, 00:36:12.722 { 00:36:12.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:12.722 "dma_device_type": 2 00:36:12.722 } 00:36:12.722 ], 00:36:12.722 "driver_specific": {} 00:36:12.722 } 00:36:12.722 ] 00:36:12.722 23:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:36:12.722 23:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:12.722 23:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:36:12.722 23:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:12.722 23:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:12.722 23:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:12.722 23:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:12.722 23:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:12.722 23:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:12.722 23:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:12.722 23:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:12.722 23:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:12.722 23:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:12.722 23:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:12.722 23:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.722 23:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:12.722 23:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.722 23:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:12.722 "name": "Existed_Raid", 00:36:12.722 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:36:12.722 "strip_size_kb": 0, 00:36:12.722 "state": "configuring", 00:36:12.722 "raid_level": "raid1", 00:36:12.722 "superblock": false, 00:36:12.722 "num_base_bdevs": 4, 00:36:12.722 "num_base_bdevs_discovered": 1, 00:36:12.722 "num_base_bdevs_operational": 4, 00:36:12.722 "base_bdevs_list": [ 00:36:12.722 { 00:36:12.722 "name": "BaseBdev1", 00:36:12.722 "uuid": "ec0a9307-639c-48e1-a8eb-2b177207916a", 00:36:12.723 "is_configured": true, 00:36:12.723 "data_offset": 0, 00:36:12.723 "data_size": 65536 00:36:12.723 }, 00:36:12.723 { 00:36:12.723 "name": "BaseBdev2", 00:36:12.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:12.723 "is_configured": false, 00:36:12.723 "data_offset": 0, 00:36:12.723 "data_size": 0 00:36:12.723 }, 00:36:12.723 { 00:36:12.723 "name": "BaseBdev3", 00:36:12.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:12.723 "is_configured": false, 00:36:12.723 "data_offset": 0, 00:36:12.723 "data_size": 0 00:36:12.723 }, 00:36:12.723 { 00:36:12.723 "name": "BaseBdev4", 00:36:12.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:12.723 "is_configured": false, 00:36:12.723 "data_offset": 0, 00:36:12.723 "data_size": 0 00:36:12.723 } 00:36:12.723 ] 00:36:12.723 }' 00:36:12.723 23:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:12.723 23:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:12.982 23:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:36:12.982 23:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.982 23:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:12.982 [2024-12-09 23:17:53.537443] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:12.982 [2024-12-09 23:17:53.537503] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:36:12.982 23:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.982 23:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:36:12.982 23:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.982 23:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:12.982 [2024-12-09 23:17:53.545470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:12.982 [2024-12-09 23:17:53.547952] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:12.982 [2024-12-09 23:17:53.548120] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:12.982 [2024-12-09 23:17:53.548218] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:12.982 [2024-12-09 23:17:53.548270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:12.982 [2024-12-09 23:17:53.548436] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:36:12.982 [2024-12-09 23:17:53.548486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:36:12.982 23:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.982 23:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:36:12.982 23:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:12.982 23:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:36:12.982 23:17:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:12.982 23:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:12.982 23:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:12.982 23:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:12.982 23:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:12.982 23:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:12.982 23:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:12.982 23:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:12.982 23:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:12.982 23:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:12.982 23:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.982 23:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:12.982 23:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:12.982 23:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.982 23:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:12.982 "name": "Existed_Raid", 00:36:12.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:12.982 "strip_size_kb": 0, 00:36:12.982 "state": "configuring", 00:36:12.982 "raid_level": "raid1", 00:36:12.982 "superblock": false, 00:36:12.982 "num_base_bdevs": 4, 00:36:12.982 "num_base_bdevs_discovered": 1, 00:36:12.982 
"num_base_bdevs_operational": 4, 00:36:12.982 "base_bdevs_list": [ 00:36:12.982 { 00:36:12.982 "name": "BaseBdev1", 00:36:12.982 "uuid": "ec0a9307-639c-48e1-a8eb-2b177207916a", 00:36:12.982 "is_configured": true, 00:36:12.982 "data_offset": 0, 00:36:12.982 "data_size": 65536 00:36:12.982 }, 00:36:12.982 { 00:36:12.982 "name": "BaseBdev2", 00:36:12.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:12.982 "is_configured": false, 00:36:12.982 "data_offset": 0, 00:36:12.982 "data_size": 0 00:36:12.982 }, 00:36:12.982 { 00:36:12.982 "name": "BaseBdev3", 00:36:12.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:12.982 "is_configured": false, 00:36:12.982 "data_offset": 0, 00:36:12.982 "data_size": 0 00:36:12.982 }, 00:36:12.982 { 00:36:12.982 "name": "BaseBdev4", 00:36:12.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:12.982 "is_configured": false, 00:36:12.982 "data_offset": 0, 00:36:12.982 "data_size": 0 00:36:12.982 } 00:36:12.982 ] 00:36:12.982 }' 00:36:12.982 23:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:12.982 23:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:13.548 23:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:36:13.549 23:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.549 23:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:13.549 [2024-12-09 23:17:54.026827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:13.549 BaseBdev2 00:36:13.549 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.549 23:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:36:13.549 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev2 00:36:13.549 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:13.549 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:13.549 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:13.549 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:13.549 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:13.549 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.549 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:13.549 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.549 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:13.549 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.549 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:13.549 [ 00:36:13.549 { 00:36:13.549 "name": "BaseBdev2", 00:36:13.549 "aliases": [ 00:36:13.549 "7b05022b-262a-40c0-a344-a04e8dfa1c8f" 00:36:13.549 ], 00:36:13.549 "product_name": "Malloc disk", 00:36:13.549 "block_size": 512, 00:36:13.549 "num_blocks": 65536, 00:36:13.549 "uuid": "7b05022b-262a-40c0-a344-a04e8dfa1c8f", 00:36:13.549 "assigned_rate_limits": { 00:36:13.549 "rw_ios_per_sec": 0, 00:36:13.549 "rw_mbytes_per_sec": 0, 00:36:13.549 "r_mbytes_per_sec": 0, 00:36:13.549 "w_mbytes_per_sec": 0 00:36:13.549 }, 00:36:13.549 "claimed": true, 00:36:13.549 "claim_type": "exclusive_write", 00:36:13.549 "zoned": false, 00:36:13.549 "supported_io_types": { 00:36:13.549 "read": true, 00:36:13.549 "write": true, 00:36:13.549 
"unmap": true, 00:36:13.549 "flush": true, 00:36:13.549 "reset": true, 00:36:13.549 "nvme_admin": false, 00:36:13.549 "nvme_io": false, 00:36:13.549 "nvme_io_md": false, 00:36:13.549 "write_zeroes": true, 00:36:13.549 "zcopy": true, 00:36:13.549 "get_zone_info": false, 00:36:13.549 "zone_management": false, 00:36:13.549 "zone_append": false, 00:36:13.549 "compare": false, 00:36:13.549 "compare_and_write": false, 00:36:13.549 "abort": true, 00:36:13.549 "seek_hole": false, 00:36:13.549 "seek_data": false, 00:36:13.549 "copy": true, 00:36:13.549 "nvme_iov_md": false 00:36:13.549 }, 00:36:13.549 "memory_domains": [ 00:36:13.549 { 00:36:13.549 "dma_device_id": "system", 00:36:13.549 "dma_device_type": 1 00:36:13.549 }, 00:36:13.549 { 00:36:13.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:13.549 "dma_device_type": 2 00:36:13.549 } 00:36:13.549 ], 00:36:13.549 "driver_specific": {} 00:36:13.549 } 00:36:13.549 ] 00:36:13.549 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.549 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:13.549 23:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:36:13.549 23:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:13.549 23:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:36:13.549 23:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:13.549 23:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:13.549 23:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:13.549 23:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:13.549 23:17:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:13.549 23:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:13.549 23:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:13.549 23:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:13.549 23:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:13.549 23:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:13.549 23:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:13.549 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.549 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:13.549 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.549 23:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:13.549 "name": "Existed_Raid", 00:36:13.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:13.549 "strip_size_kb": 0, 00:36:13.549 "state": "configuring", 00:36:13.549 "raid_level": "raid1", 00:36:13.549 "superblock": false, 00:36:13.549 "num_base_bdevs": 4, 00:36:13.549 "num_base_bdevs_discovered": 2, 00:36:13.549 "num_base_bdevs_operational": 4, 00:36:13.549 "base_bdevs_list": [ 00:36:13.549 { 00:36:13.549 "name": "BaseBdev1", 00:36:13.549 "uuid": "ec0a9307-639c-48e1-a8eb-2b177207916a", 00:36:13.549 "is_configured": true, 00:36:13.549 "data_offset": 0, 00:36:13.549 "data_size": 65536 00:36:13.549 }, 00:36:13.549 { 00:36:13.549 "name": "BaseBdev2", 00:36:13.549 "uuid": "7b05022b-262a-40c0-a344-a04e8dfa1c8f", 00:36:13.549 "is_configured": true, 00:36:13.549 
"data_offset": 0, 00:36:13.549 "data_size": 65536 00:36:13.549 }, 00:36:13.549 { 00:36:13.549 "name": "BaseBdev3", 00:36:13.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:13.549 "is_configured": false, 00:36:13.549 "data_offset": 0, 00:36:13.549 "data_size": 0 00:36:13.549 }, 00:36:13.549 { 00:36:13.549 "name": "BaseBdev4", 00:36:13.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:13.549 "is_configured": false, 00:36:13.549 "data_offset": 0, 00:36:13.549 "data_size": 0 00:36:13.549 } 00:36:13.549 ] 00:36:13.549 }' 00:36:13.549 23:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:13.549 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.124 23:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:36:14.124 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.124 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.124 [2024-12-09 23:17:54.586523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:14.124 BaseBdev3 00:36:14.124 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.124 23:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:36:14.124 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:36:14.124 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:14.124 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:14.124 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:14.124 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:36:14.124 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:14.124 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.124 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.124 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.124 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:36:14.124 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.124 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.124 [ 00:36:14.124 { 00:36:14.124 "name": "BaseBdev3", 00:36:14.124 "aliases": [ 00:36:14.124 "b486a227-a1ae-4c8d-949c-4c3e210d9607" 00:36:14.124 ], 00:36:14.124 "product_name": "Malloc disk", 00:36:14.124 "block_size": 512, 00:36:14.124 "num_blocks": 65536, 00:36:14.124 "uuid": "b486a227-a1ae-4c8d-949c-4c3e210d9607", 00:36:14.124 "assigned_rate_limits": { 00:36:14.124 "rw_ios_per_sec": 0, 00:36:14.124 "rw_mbytes_per_sec": 0, 00:36:14.124 "r_mbytes_per_sec": 0, 00:36:14.124 "w_mbytes_per_sec": 0 00:36:14.124 }, 00:36:14.124 "claimed": true, 00:36:14.124 "claim_type": "exclusive_write", 00:36:14.124 "zoned": false, 00:36:14.124 "supported_io_types": { 00:36:14.124 "read": true, 00:36:14.124 "write": true, 00:36:14.124 "unmap": true, 00:36:14.124 "flush": true, 00:36:14.124 "reset": true, 00:36:14.124 "nvme_admin": false, 00:36:14.124 "nvme_io": false, 00:36:14.124 "nvme_io_md": false, 00:36:14.124 "write_zeroes": true, 00:36:14.124 "zcopy": true, 00:36:14.124 "get_zone_info": false, 00:36:14.124 "zone_management": false, 00:36:14.124 "zone_append": false, 00:36:14.124 "compare": false, 00:36:14.124 "compare_and_write": false, 00:36:14.124 "abort": true, 
00:36:14.124 "seek_hole": false, 00:36:14.124 "seek_data": false, 00:36:14.124 "copy": true, 00:36:14.124 "nvme_iov_md": false 00:36:14.124 }, 00:36:14.124 "memory_domains": [ 00:36:14.124 { 00:36:14.124 "dma_device_id": "system", 00:36:14.124 "dma_device_type": 1 00:36:14.124 }, 00:36:14.124 { 00:36:14.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:14.124 "dma_device_type": 2 00:36:14.124 } 00:36:14.124 ], 00:36:14.124 "driver_specific": {} 00:36:14.124 } 00:36:14.124 ] 00:36:14.124 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.124 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:14.124 23:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:36:14.124 23:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:14.124 23:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:36:14.124 23:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:14.124 23:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:14.124 23:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:14.124 23:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:14.124 23:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:14.124 23:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:14.124 23:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:14.124 23:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:14.124 23:17:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:14.124 23:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:14.124 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.124 23:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:14.124 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.124 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.124 23:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:14.124 "name": "Existed_Raid", 00:36:14.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:14.124 "strip_size_kb": 0, 00:36:14.124 "state": "configuring", 00:36:14.124 "raid_level": "raid1", 00:36:14.124 "superblock": false, 00:36:14.124 "num_base_bdevs": 4, 00:36:14.124 "num_base_bdevs_discovered": 3, 00:36:14.124 "num_base_bdevs_operational": 4, 00:36:14.124 "base_bdevs_list": [ 00:36:14.124 { 00:36:14.124 "name": "BaseBdev1", 00:36:14.124 "uuid": "ec0a9307-639c-48e1-a8eb-2b177207916a", 00:36:14.124 "is_configured": true, 00:36:14.124 "data_offset": 0, 00:36:14.124 "data_size": 65536 00:36:14.124 }, 00:36:14.124 { 00:36:14.124 "name": "BaseBdev2", 00:36:14.124 "uuid": "7b05022b-262a-40c0-a344-a04e8dfa1c8f", 00:36:14.124 "is_configured": true, 00:36:14.124 "data_offset": 0, 00:36:14.124 "data_size": 65536 00:36:14.124 }, 00:36:14.124 { 00:36:14.124 "name": "BaseBdev3", 00:36:14.124 "uuid": "b486a227-a1ae-4c8d-949c-4c3e210d9607", 00:36:14.124 "is_configured": true, 00:36:14.124 "data_offset": 0, 00:36:14.124 "data_size": 65536 00:36:14.124 }, 00:36:14.124 { 00:36:14.124 "name": "BaseBdev4", 00:36:14.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:14.124 "is_configured": false, 00:36:14.124 "data_offset": 
0, 00:36:14.124 "data_size": 0 00:36:14.124 } 00:36:14.124 ] 00:36:14.124 }' 00:36:14.124 23:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:14.124 23:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.693 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:36:14.693 23:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.693 23:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.693 [2024-12-09 23:17:55.137135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:36:14.693 [2024-12-09 23:17:55.137340] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:36:14.693 [2024-12-09 23:17:55.137362] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:36:14.693 [2024-12-09 23:17:55.137700] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:36:14.693 [2024-12-09 23:17:55.137891] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:36:14.693 [2024-12-09 23:17:55.137908] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:36:14.693 [2024-12-09 23:17:55.138195] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:14.693 BaseBdev4 00:36:14.693 23:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.693 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:36:14.693 23:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:36:14.693 23:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local 
bdev_timeout= 00:36:14.693 23:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:14.693 23:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:14.693 23:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:14.693 23:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:14.693 23:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.693 23:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.693 23:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.693 23:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:36:14.693 23:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.693 23:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.693 [ 00:36:14.693 { 00:36:14.693 "name": "BaseBdev4", 00:36:14.693 "aliases": [ 00:36:14.693 "f02ef3e3-53f7-41eb-8a66-05fb7cdff929" 00:36:14.693 ], 00:36:14.693 "product_name": "Malloc disk", 00:36:14.693 "block_size": 512, 00:36:14.693 "num_blocks": 65536, 00:36:14.693 "uuid": "f02ef3e3-53f7-41eb-8a66-05fb7cdff929", 00:36:14.693 "assigned_rate_limits": { 00:36:14.693 "rw_ios_per_sec": 0, 00:36:14.693 "rw_mbytes_per_sec": 0, 00:36:14.693 "r_mbytes_per_sec": 0, 00:36:14.693 "w_mbytes_per_sec": 0 00:36:14.693 }, 00:36:14.693 "claimed": true, 00:36:14.693 "claim_type": "exclusive_write", 00:36:14.693 "zoned": false, 00:36:14.693 "supported_io_types": { 00:36:14.693 "read": true, 00:36:14.693 "write": true, 00:36:14.693 "unmap": true, 00:36:14.693 "flush": true, 00:36:14.693 "reset": true, 00:36:14.693 "nvme_admin": false, 00:36:14.693 "nvme_io": 
false, 00:36:14.693 "nvme_io_md": false, 00:36:14.693 "write_zeroes": true, 00:36:14.693 "zcopy": true, 00:36:14.693 "get_zone_info": false, 00:36:14.693 "zone_management": false, 00:36:14.693 "zone_append": false, 00:36:14.693 "compare": false, 00:36:14.693 "compare_and_write": false, 00:36:14.693 "abort": true, 00:36:14.693 "seek_hole": false, 00:36:14.693 "seek_data": false, 00:36:14.693 "copy": true, 00:36:14.693 "nvme_iov_md": false 00:36:14.693 }, 00:36:14.693 "memory_domains": [ 00:36:14.693 { 00:36:14.693 "dma_device_id": "system", 00:36:14.693 "dma_device_type": 1 00:36:14.693 }, 00:36:14.693 { 00:36:14.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:14.693 "dma_device_type": 2 00:36:14.693 } 00:36:14.693 ], 00:36:14.693 "driver_specific": {} 00:36:14.693 } 00:36:14.693 ] 00:36:14.693 23:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.693 23:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:14.693 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:36:14.693 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:14.693 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:36:14.693 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:14.693 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:14.693 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:14.693 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:14.693 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:14.693 23:17:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:14.693 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:14.693 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:14.693 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:14.693 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:14.693 23:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.693 23:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.693 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:14.693 23:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.693 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:14.693 "name": "Existed_Raid", 00:36:14.693 "uuid": "8f37cd1b-f81a-4e23-b2c6-d3c700d1ecd8", 00:36:14.693 "strip_size_kb": 0, 00:36:14.693 "state": "online", 00:36:14.693 "raid_level": "raid1", 00:36:14.693 "superblock": false, 00:36:14.693 "num_base_bdevs": 4, 00:36:14.693 "num_base_bdevs_discovered": 4, 00:36:14.693 "num_base_bdevs_operational": 4, 00:36:14.693 "base_bdevs_list": [ 00:36:14.693 { 00:36:14.693 "name": "BaseBdev1", 00:36:14.693 "uuid": "ec0a9307-639c-48e1-a8eb-2b177207916a", 00:36:14.693 "is_configured": true, 00:36:14.693 "data_offset": 0, 00:36:14.693 "data_size": 65536 00:36:14.693 }, 00:36:14.693 { 00:36:14.693 "name": "BaseBdev2", 00:36:14.693 "uuid": "7b05022b-262a-40c0-a344-a04e8dfa1c8f", 00:36:14.693 "is_configured": true, 00:36:14.693 "data_offset": 0, 00:36:14.693 "data_size": 65536 00:36:14.693 }, 00:36:14.693 { 00:36:14.693 "name": "BaseBdev3", 00:36:14.693 "uuid": "b486a227-a1ae-4c8d-949c-4c3e210d9607", 
00:36:14.693 "is_configured": true, 00:36:14.693 "data_offset": 0, 00:36:14.693 "data_size": 65536 00:36:14.693 }, 00:36:14.693 { 00:36:14.693 "name": "BaseBdev4", 00:36:14.693 "uuid": "f02ef3e3-53f7-41eb-8a66-05fb7cdff929", 00:36:14.693 "is_configured": true, 00:36:14.693 "data_offset": 0, 00:36:14.693 "data_size": 65536 00:36:14.693 } 00:36:14.693 ] 00:36:14.693 }' 00:36:14.693 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:14.693 23:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:15.260 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:36:15.260 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:36:15.260 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:36:15.260 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:36:15.260 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:36:15.260 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:36:15.260 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:15.260 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:36:15.260 23:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.260 23:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:15.260 [2024-12-09 23:17:55.620871] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:15.260 23:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.260 23:17:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:15.260 "name": "Existed_Raid", 00:36:15.260 "aliases": [ 00:36:15.260 "8f37cd1b-f81a-4e23-b2c6-d3c700d1ecd8" 00:36:15.260 ], 00:36:15.260 "product_name": "Raid Volume", 00:36:15.260 "block_size": 512, 00:36:15.260 "num_blocks": 65536, 00:36:15.260 "uuid": "8f37cd1b-f81a-4e23-b2c6-d3c700d1ecd8", 00:36:15.260 "assigned_rate_limits": { 00:36:15.260 "rw_ios_per_sec": 0, 00:36:15.260 "rw_mbytes_per_sec": 0, 00:36:15.260 "r_mbytes_per_sec": 0, 00:36:15.260 "w_mbytes_per_sec": 0 00:36:15.260 }, 00:36:15.260 "claimed": false, 00:36:15.260 "zoned": false, 00:36:15.260 "supported_io_types": { 00:36:15.260 "read": true, 00:36:15.260 "write": true, 00:36:15.260 "unmap": false, 00:36:15.260 "flush": false, 00:36:15.260 "reset": true, 00:36:15.260 "nvme_admin": false, 00:36:15.260 "nvme_io": false, 00:36:15.260 "nvme_io_md": false, 00:36:15.260 "write_zeroes": true, 00:36:15.260 "zcopy": false, 00:36:15.260 "get_zone_info": false, 00:36:15.260 "zone_management": false, 00:36:15.260 "zone_append": false, 00:36:15.260 "compare": false, 00:36:15.260 "compare_and_write": false, 00:36:15.260 "abort": false, 00:36:15.260 "seek_hole": false, 00:36:15.260 "seek_data": false, 00:36:15.260 "copy": false, 00:36:15.260 "nvme_iov_md": false 00:36:15.260 }, 00:36:15.260 "memory_domains": [ 00:36:15.260 { 00:36:15.260 "dma_device_id": "system", 00:36:15.260 "dma_device_type": 1 00:36:15.260 }, 00:36:15.260 { 00:36:15.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:15.260 "dma_device_type": 2 00:36:15.260 }, 00:36:15.260 { 00:36:15.260 "dma_device_id": "system", 00:36:15.260 "dma_device_type": 1 00:36:15.260 }, 00:36:15.260 { 00:36:15.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:15.260 "dma_device_type": 2 00:36:15.260 }, 00:36:15.260 { 00:36:15.260 "dma_device_id": "system", 00:36:15.260 "dma_device_type": 1 00:36:15.260 }, 00:36:15.260 { 00:36:15.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:15.260 "dma_device_type": 2 
00:36:15.260 }, 00:36:15.260 { 00:36:15.260 "dma_device_id": "system", 00:36:15.260 "dma_device_type": 1 00:36:15.260 }, 00:36:15.260 { 00:36:15.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:15.260 "dma_device_type": 2 00:36:15.260 } 00:36:15.260 ], 00:36:15.260 "driver_specific": { 00:36:15.260 "raid": { 00:36:15.260 "uuid": "8f37cd1b-f81a-4e23-b2c6-d3c700d1ecd8", 00:36:15.260 "strip_size_kb": 0, 00:36:15.260 "state": "online", 00:36:15.260 "raid_level": "raid1", 00:36:15.260 "superblock": false, 00:36:15.260 "num_base_bdevs": 4, 00:36:15.260 "num_base_bdevs_discovered": 4, 00:36:15.260 "num_base_bdevs_operational": 4, 00:36:15.260 "base_bdevs_list": [ 00:36:15.260 { 00:36:15.260 "name": "BaseBdev1", 00:36:15.260 "uuid": "ec0a9307-639c-48e1-a8eb-2b177207916a", 00:36:15.260 "is_configured": true, 00:36:15.260 "data_offset": 0, 00:36:15.260 "data_size": 65536 00:36:15.260 }, 00:36:15.260 { 00:36:15.260 "name": "BaseBdev2", 00:36:15.260 "uuid": "7b05022b-262a-40c0-a344-a04e8dfa1c8f", 00:36:15.260 "is_configured": true, 00:36:15.260 "data_offset": 0, 00:36:15.260 "data_size": 65536 00:36:15.260 }, 00:36:15.260 { 00:36:15.260 "name": "BaseBdev3", 00:36:15.260 "uuid": "b486a227-a1ae-4c8d-949c-4c3e210d9607", 00:36:15.260 "is_configured": true, 00:36:15.260 "data_offset": 0, 00:36:15.260 "data_size": 65536 00:36:15.260 }, 00:36:15.261 { 00:36:15.261 "name": "BaseBdev4", 00:36:15.261 "uuid": "f02ef3e3-53f7-41eb-8a66-05fb7cdff929", 00:36:15.261 "is_configured": true, 00:36:15.261 "data_offset": 0, 00:36:15.261 "data_size": 65536 00:36:15.261 } 00:36:15.261 ] 00:36:15.261 } 00:36:15.261 } 00:36:15.261 }' 00:36:15.261 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:15.261 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:36:15.261 BaseBdev2 00:36:15.261 BaseBdev3 00:36:15.261 BaseBdev4' 00:36:15.261 
23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:15.261 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:36:15.261 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:15.261 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:36:15.261 23:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.261 23:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:15.261 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:15.261 23:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.261 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:15.261 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:15.261 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:15.261 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:36:15.261 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:15.261 23:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.261 23:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:15.261 23:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.261 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:36:15.261 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:15.261 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:15.261 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:36:15.261 23:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.261 23:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:15.261 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:15.261 23:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.519 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:15.519 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:15.519 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:15.519 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:36:15.519 23:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.519 23:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:15.520 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:15.520 23:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.520 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:15.520 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:36:15.520 23:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:36:15.520 23:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.520 23:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:15.520 [2024-12-09 23:17:55.968095] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:15.520 23:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.520 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:36:15.520 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:36:15.520 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:36:15.520 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:36:15.520 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:36:15.520 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:36:15.520 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:15.520 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:15.520 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:15.520 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:15.520 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:15.520 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:15.520 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:36:15.520 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:15.520 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:15.520 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:15.520 23:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.520 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:15.520 23:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:15.520 23:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.520 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:15.520 "name": "Existed_Raid", 00:36:15.520 "uuid": "8f37cd1b-f81a-4e23-b2c6-d3c700d1ecd8", 00:36:15.520 "strip_size_kb": 0, 00:36:15.520 "state": "online", 00:36:15.520 "raid_level": "raid1", 00:36:15.520 "superblock": false, 00:36:15.520 "num_base_bdevs": 4, 00:36:15.520 "num_base_bdevs_discovered": 3, 00:36:15.520 "num_base_bdevs_operational": 3, 00:36:15.520 "base_bdevs_list": [ 00:36:15.520 { 00:36:15.520 "name": null, 00:36:15.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:15.520 "is_configured": false, 00:36:15.520 "data_offset": 0, 00:36:15.520 "data_size": 65536 00:36:15.520 }, 00:36:15.520 { 00:36:15.520 "name": "BaseBdev2", 00:36:15.520 "uuid": "7b05022b-262a-40c0-a344-a04e8dfa1c8f", 00:36:15.520 "is_configured": true, 00:36:15.520 "data_offset": 0, 00:36:15.520 "data_size": 65536 00:36:15.520 }, 00:36:15.520 { 00:36:15.520 "name": "BaseBdev3", 00:36:15.520 "uuid": "b486a227-a1ae-4c8d-949c-4c3e210d9607", 00:36:15.520 "is_configured": true, 00:36:15.520 "data_offset": 0, 00:36:15.520 "data_size": 65536 00:36:15.520 }, 00:36:15.520 { 
00:36:15.520 "name": "BaseBdev4", 00:36:15.520 "uuid": "f02ef3e3-53f7-41eb-8a66-05fb7cdff929", 00:36:15.520 "is_configured": true, 00:36:15.520 "data_offset": 0, 00:36:15.520 "data_size": 65536 00:36:15.520 } 00:36:15.520 ] 00:36:15.520 }' 00:36:15.520 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:15.520 23:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.089 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:36:16.089 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:16.089 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:16.089 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:36:16.089 23:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.089 23:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.089 23:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.089 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:36:16.089 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:16.089 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:36:16.089 23:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.089 23:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.089 [2024-12-09 23:17:56.543955] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:16.089 23:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.089 
23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:36:16.089 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:16.089 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:16.089 23:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.089 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:36:16.089 23:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.089 23:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.089 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:36:16.089 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:16.089 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:36:16.089 23:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.089 23:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.089 [2024-12-09 23:17:56.698885] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:36:16.349 23:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.349 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:36:16.349 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:16.349 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:16.349 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:36:16.349 23:17:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.349 23:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.349 23:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.349 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:36:16.349 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:16.349 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:36:16.349 23:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.349 23:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.349 [2024-12-09 23:17:56.852280] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:36:16.349 [2024-12-09 23:17:56.852383] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:16.349 [2024-12-09 23:17:56.950748] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:16.349 [2024-12-09 23:17:56.950809] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:16.349 [2024-12-09 23:17:56.950825] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:36:16.349 23:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.349 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:36:16.349 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:16.349 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:16.349 23:17:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.349 23:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:36:16.349 23:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.349 23:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.610 BaseBdev2 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:16.610 23:17:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.610 [ 00:36:16.610 { 00:36:16.610 "name": "BaseBdev2", 00:36:16.610 "aliases": [ 00:36:16.610 "64df890b-7ed4-476d-8a86-4235e915b1f7" 00:36:16.610 ], 00:36:16.610 "product_name": "Malloc disk", 00:36:16.610 "block_size": 512, 00:36:16.610 "num_blocks": 65536, 00:36:16.610 "uuid": "64df890b-7ed4-476d-8a86-4235e915b1f7", 00:36:16.610 "assigned_rate_limits": { 00:36:16.610 "rw_ios_per_sec": 0, 00:36:16.610 "rw_mbytes_per_sec": 0, 00:36:16.610 "r_mbytes_per_sec": 0, 00:36:16.610 "w_mbytes_per_sec": 0 00:36:16.610 }, 00:36:16.610 "claimed": false, 00:36:16.610 "zoned": false, 00:36:16.610 "supported_io_types": { 00:36:16.610 "read": true, 00:36:16.610 "write": true, 00:36:16.610 "unmap": true, 00:36:16.610 "flush": true, 00:36:16.610 "reset": true, 00:36:16.610 "nvme_admin": false, 00:36:16.610 "nvme_io": false, 00:36:16.610 "nvme_io_md": false, 00:36:16.610 "write_zeroes": true, 00:36:16.610 "zcopy": true, 00:36:16.610 "get_zone_info": false, 00:36:16.610 "zone_management": false, 00:36:16.610 "zone_append": false, 00:36:16.610 "compare": false, 00:36:16.610 "compare_and_write": false, 
00:36:16.610 "abort": true, 00:36:16.610 "seek_hole": false, 00:36:16.610 "seek_data": false, 00:36:16.610 "copy": true, 00:36:16.610 "nvme_iov_md": false 00:36:16.610 }, 00:36:16.610 "memory_domains": [ 00:36:16.610 { 00:36:16.610 "dma_device_id": "system", 00:36:16.610 "dma_device_type": 1 00:36:16.610 }, 00:36:16.610 { 00:36:16.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:16.610 "dma_device_type": 2 00:36:16.610 } 00:36:16.610 ], 00:36:16.610 "driver_specific": {} 00:36:16.610 } 00:36:16.610 ] 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.610 BaseBdev3 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:16.610 23:17:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.610 [ 00:36:16.610 { 00:36:16.610 "name": "BaseBdev3", 00:36:16.610 "aliases": [ 00:36:16.610 "7691b8ee-5c53-4af5-a975-f949959d5df3" 00:36:16.610 ], 00:36:16.610 "product_name": "Malloc disk", 00:36:16.610 "block_size": 512, 00:36:16.610 "num_blocks": 65536, 00:36:16.610 "uuid": "7691b8ee-5c53-4af5-a975-f949959d5df3", 00:36:16.610 "assigned_rate_limits": { 00:36:16.610 "rw_ios_per_sec": 0, 00:36:16.610 "rw_mbytes_per_sec": 0, 00:36:16.610 "r_mbytes_per_sec": 0, 00:36:16.610 "w_mbytes_per_sec": 0 00:36:16.610 }, 00:36:16.610 "claimed": false, 00:36:16.610 "zoned": false, 00:36:16.610 "supported_io_types": { 00:36:16.610 "read": true, 00:36:16.610 "write": true, 00:36:16.610 "unmap": true, 00:36:16.610 "flush": true, 00:36:16.610 "reset": true, 00:36:16.610 "nvme_admin": false, 00:36:16.610 "nvme_io": false, 00:36:16.610 "nvme_io_md": false, 00:36:16.610 "write_zeroes": true, 00:36:16.610 "zcopy": true, 00:36:16.610 "get_zone_info": false, 00:36:16.610 "zone_management": false, 00:36:16.610 "zone_append": false, 00:36:16.610 "compare": false, 00:36:16.610 "compare_and_write": false, 
00:36:16.610 "abort": true, 00:36:16.610 "seek_hole": false, 00:36:16.610 "seek_data": false, 00:36:16.610 "copy": true, 00:36:16.610 "nvme_iov_md": false 00:36:16.610 }, 00:36:16.610 "memory_domains": [ 00:36:16.610 { 00:36:16.610 "dma_device_id": "system", 00:36:16.610 "dma_device_type": 1 00:36:16.610 }, 00:36:16.610 { 00:36:16.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:16.610 "dma_device_type": 2 00:36:16.610 } 00:36:16.610 ], 00:36:16.610 "driver_specific": {} 00:36:16.610 } 00:36:16.610 ] 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.610 BaseBdev4 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:16.610 23:17:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.610 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.874 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.874 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:36:16.874 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.874 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.874 [ 00:36:16.874 { 00:36:16.874 "name": "BaseBdev4", 00:36:16.874 "aliases": [ 00:36:16.874 "1d328aa1-adca-4826-9760-82bd38cb1fcc" 00:36:16.874 ], 00:36:16.874 "product_name": "Malloc disk", 00:36:16.874 "block_size": 512, 00:36:16.874 "num_blocks": 65536, 00:36:16.875 "uuid": "1d328aa1-adca-4826-9760-82bd38cb1fcc", 00:36:16.875 "assigned_rate_limits": { 00:36:16.875 "rw_ios_per_sec": 0, 00:36:16.875 "rw_mbytes_per_sec": 0, 00:36:16.875 "r_mbytes_per_sec": 0, 00:36:16.875 "w_mbytes_per_sec": 0 00:36:16.875 }, 00:36:16.875 "claimed": false, 00:36:16.875 "zoned": false, 00:36:16.875 "supported_io_types": { 00:36:16.875 "read": true, 00:36:16.875 "write": true, 00:36:16.875 "unmap": true, 00:36:16.875 "flush": true, 00:36:16.875 "reset": true, 00:36:16.875 "nvme_admin": false, 00:36:16.875 "nvme_io": false, 00:36:16.875 "nvme_io_md": false, 00:36:16.875 "write_zeroes": true, 00:36:16.875 "zcopy": true, 00:36:16.875 "get_zone_info": false, 00:36:16.875 "zone_management": false, 00:36:16.875 "zone_append": false, 00:36:16.875 "compare": false, 00:36:16.875 "compare_and_write": false, 
00:36:16.875 "abort": true, 00:36:16.875 "seek_hole": false, 00:36:16.875 "seek_data": false, 00:36:16.875 "copy": true, 00:36:16.875 "nvme_iov_md": false 00:36:16.875 }, 00:36:16.875 "memory_domains": [ 00:36:16.875 { 00:36:16.875 "dma_device_id": "system", 00:36:16.875 "dma_device_type": 1 00:36:16.875 }, 00:36:16.875 { 00:36:16.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:16.875 "dma_device_type": 2 00:36:16.875 } 00:36:16.875 ], 00:36:16.875 "driver_specific": {} 00:36:16.875 } 00:36:16.875 ] 00:36:16.875 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.875 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:16.875 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:36:16.875 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:36:16.875 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:36:16.875 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.875 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.875 [2024-12-09 23:17:57.282328] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:16.875 [2024-12-09 23:17:57.282520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:16.875 [2024-12-09 23:17:57.282630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:16.875 [2024-12-09 23:17:57.284853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:16.875 [2024-12-09 23:17:57.285023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:36:16.875 23:17:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.875 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:36:16.875 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:16.875 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:16.875 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:16.875 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:16.875 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:16.875 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:16.875 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:16.875 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:16.875 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:16.875 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:16.875 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:16.875 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.875 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.875 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.875 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:16.875 "name": "Existed_Raid", 00:36:16.875 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:36:16.875 "strip_size_kb": 0, 00:36:16.875 "state": "configuring", 00:36:16.875 "raid_level": "raid1", 00:36:16.875 "superblock": false, 00:36:16.875 "num_base_bdevs": 4, 00:36:16.875 "num_base_bdevs_discovered": 3, 00:36:16.875 "num_base_bdevs_operational": 4, 00:36:16.875 "base_bdevs_list": [ 00:36:16.875 { 00:36:16.875 "name": "BaseBdev1", 00:36:16.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:16.875 "is_configured": false, 00:36:16.875 "data_offset": 0, 00:36:16.875 "data_size": 0 00:36:16.875 }, 00:36:16.875 { 00:36:16.875 "name": "BaseBdev2", 00:36:16.875 "uuid": "64df890b-7ed4-476d-8a86-4235e915b1f7", 00:36:16.875 "is_configured": true, 00:36:16.875 "data_offset": 0, 00:36:16.875 "data_size": 65536 00:36:16.875 }, 00:36:16.875 { 00:36:16.875 "name": "BaseBdev3", 00:36:16.875 "uuid": "7691b8ee-5c53-4af5-a975-f949959d5df3", 00:36:16.875 "is_configured": true, 00:36:16.875 "data_offset": 0, 00:36:16.875 "data_size": 65536 00:36:16.875 }, 00:36:16.875 { 00:36:16.875 "name": "BaseBdev4", 00:36:16.875 "uuid": "1d328aa1-adca-4826-9760-82bd38cb1fcc", 00:36:16.875 "is_configured": true, 00:36:16.875 "data_offset": 0, 00:36:16.875 "data_size": 65536 00:36:16.875 } 00:36:16.875 ] 00:36:16.875 }' 00:36:16.875 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:16.875 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:17.139 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:36:17.139 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.139 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:17.139 [2024-12-09 23:17:57.709735] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:17.139 23:17:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.139 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:36:17.139 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:17.139 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:17.139 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:17.139 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:17.139 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:17.139 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:17.139 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:17.139 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:17.139 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:17.139 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:17.139 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:17.139 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.139 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:17.139 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.139 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:17.139 "name": "Existed_Raid", 00:36:17.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:17.139 
"strip_size_kb": 0, 00:36:17.139 "state": "configuring", 00:36:17.139 "raid_level": "raid1", 00:36:17.139 "superblock": false, 00:36:17.139 "num_base_bdevs": 4, 00:36:17.139 "num_base_bdevs_discovered": 2, 00:36:17.139 "num_base_bdevs_operational": 4, 00:36:17.139 "base_bdevs_list": [ 00:36:17.139 { 00:36:17.139 "name": "BaseBdev1", 00:36:17.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:17.139 "is_configured": false, 00:36:17.139 "data_offset": 0, 00:36:17.139 "data_size": 0 00:36:17.139 }, 00:36:17.139 { 00:36:17.139 "name": null, 00:36:17.139 "uuid": "64df890b-7ed4-476d-8a86-4235e915b1f7", 00:36:17.139 "is_configured": false, 00:36:17.139 "data_offset": 0, 00:36:17.139 "data_size": 65536 00:36:17.139 }, 00:36:17.139 { 00:36:17.139 "name": "BaseBdev3", 00:36:17.139 "uuid": "7691b8ee-5c53-4af5-a975-f949959d5df3", 00:36:17.139 "is_configured": true, 00:36:17.139 "data_offset": 0, 00:36:17.139 "data_size": 65536 00:36:17.139 }, 00:36:17.139 { 00:36:17.139 "name": "BaseBdev4", 00:36:17.139 "uuid": "1d328aa1-adca-4826-9760-82bd38cb1fcc", 00:36:17.139 "is_configured": true, 00:36:17.139 "data_offset": 0, 00:36:17.139 "data_size": 65536 00:36:17.139 } 00:36:17.139 ] 00:36:17.139 }' 00:36:17.139 23:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:17.139 23:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:17.716 23:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:36:17.716 23:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:17.716 23:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.716 23:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:17.716 23:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.716 23:17:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:36:17.716 23:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:36:17.716 23:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.716 23:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:17.716 [2024-12-09 23:17:58.193966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:17.716 BaseBdev1 00:36:17.716 23:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.716 23:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:36:17.716 23:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:36:17.716 23:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:17.716 23:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:17.716 23:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:17.716 23:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:17.716 23:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:17.716 23:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.716 23:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:17.716 23:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.716 23:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:17.716 23:17:58 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.716 23:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:17.716 [ 00:36:17.716 { 00:36:17.716 "name": "BaseBdev1", 00:36:17.716 "aliases": [ 00:36:17.716 "a904813c-6dc8-4472-ae4b-d0e2980a844b" 00:36:17.716 ], 00:36:17.716 "product_name": "Malloc disk", 00:36:17.716 "block_size": 512, 00:36:17.716 "num_blocks": 65536, 00:36:17.716 "uuid": "a904813c-6dc8-4472-ae4b-d0e2980a844b", 00:36:17.716 "assigned_rate_limits": { 00:36:17.716 "rw_ios_per_sec": 0, 00:36:17.716 "rw_mbytes_per_sec": 0, 00:36:17.716 "r_mbytes_per_sec": 0, 00:36:17.716 "w_mbytes_per_sec": 0 00:36:17.716 }, 00:36:17.716 "claimed": true, 00:36:17.716 "claim_type": "exclusive_write", 00:36:17.716 "zoned": false, 00:36:17.716 "supported_io_types": { 00:36:17.716 "read": true, 00:36:17.716 "write": true, 00:36:17.716 "unmap": true, 00:36:17.716 "flush": true, 00:36:17.716 "reset": true, 00:36:17.716 "nvme_admin": false, 00:36:17.716 "nvme_io": false, 00:36:17.716 "nvme_io_md": false, 00:36:17.716 "write_zeroes": true, 00:36:17.716 "zcopy": true, 00:36:17.716 "get_zone_info": false, 00:36:17.716 "zone_management": false, 00:36:17.716 "zone_append": false, 00:36:17.716 "compare": false, 00:36:17.716 "compare_and_write": false, 00:36:17.716 "abort": true, 00:36:17.716 "seek_hole": false, 00:36:17.716 "seek_data": false, 00:36:17.716 "copy": true, 00:36:17.716 "nvme_iov_md": false 00:36:17.716 }, 00:36:17.716 "memory_domains": [ 00:36:17.716 { 00:36:17.716 "dma_device_id": "system", 00:36:17.716 "dma_device_type": 1 00:36:17.716 }, 00:36:17.716 { 00:36:17.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:17.716 "dma_device_type": 2 00:36:17.716 } 00:36:17.716 ], 00:36:17.716 "driver_specific": {} 00:36:17.716 } 00:36:17.716 ] 00:36:17.716 23:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.716 23:17:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@911 -- # return 0 00:36:17.717 23:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:36:17.717 23:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:17.717 23:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:17.717 23:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:17.717 23:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:17.717 23:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:17.717 23:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:17.717 23:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:17.717 23:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:17.717 23:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:17.717 23:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:17.717 23:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.717 23:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:17.717 23:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:17.717 23:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.717 23:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:17.717 "name": "Existed_Raid", 00:36:17.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:17.717 
"strip_size_kb": 0, 00:36:17.717 "state": "configuring", 00:36:17.717 "raid_level": "raid1", 00:36:17.717 "superblock": false, 00:36:17.717 "num_base_bdevs": 4, 00:36:17.717 "num_base_bdevs_discovered": 3, 00:36:17.717 "num_base_bdevs_operational": 4, 00:36:17.717 "base_bdevs_list": [ 00:36:17.717 { 00:36:17.717 "name": "BaseBdev1", 00:36:17.717 "uuid": "a904813c-6dc8-4472-ae4b-d0e2980a844b", 00:36:17.717 "is_configured": true, 00:36:17.717 "data_offset": 0, 00:36:17.717 "data_size": 65536 00:36:17.717 }, 00:36:17.717 { 00:36:17.717 "name": null, 00:36:17.717 "uuid": "64df890b-7ed4-476d-8a86-4235e915b1f7", 00:36:17.717 "is_configured": false, 00:36:17.717 "data_offset": 0, 00:36:17.717 "data_size": 65536 00:36:17.717 }, 00:36:17.717 { 00:36:17.717 "name": "BaseBdev3", 00:36:17.717 "uuid": "7691b8ee-5c53-4af5-a975-f949959d5df3", 00:36:17.717 "is_configured": true, 00:36:17.717 "data_offset": 0, 00:36:17.717 "data_size": 65536 00:36:17.717 }, 00:36:17.717 { 00:36:17.717 "name": "BaseBdev4", 00:36:17.717 "uuid": "1d328aa1-adca-4826-9760-82bd38cb1fcc", 00:36:17.717 "is_configured": true, 00:36:17.717 "data_offset": 0, 00:36:17.717 "data_size": 65536 00:36:17.717 } 00:36:17.717 ] 00:36:17.717 }' 00:36:17.717 23:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:17.717 23:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:18.284 23:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:18.284 23:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.284 23:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:18.284 23:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:36:18.284 23:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.284 
23:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:36:18.284 23:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:36:18.284 23:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.284 23:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:18.284 [2024-12-09 23:17:58.673576] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:36:18.284 23:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.284 23:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:36:18.284 23:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:18.284 23:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:18.284 23:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:18.284 23:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:18.284 23:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:18.284 23:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:18.284 23:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:18.284 23:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:18.284 23:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:18.284 23:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:18.284 23:17:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:18.284 23:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.284 23:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:18.284 23:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.284 23:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:18.284 "name": "Existed_Raid", 00:36:18.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:18.284 "strip_size_kb": 0, 00:36:18.284 "state": "configuring", 00:36:18.284 "raid_level": "raid1", 00:36:18.284 "superblock": false, 00:36:18.284 "num_base_bdevs": 4, 00:36:18.284 "num_base_bdevs_discovered": 2, 00:36:18.284 "num_base_bdevs_operational": 4, 00:36:18.284 "base_bdevs_list": [ 00:36:18.284 { 00:36:18.284 "name": "BaseBdev1", 00:36:18.284 "uuid": "a904813c-6dc8-4472-ae4b-d0e2980a844b", 00:36:18.284 "is_configured": true, 00:36:18.284 "data_offset": 0, 00:36:18.284 "data_size": 65536 00:36:18.284 }, 00:36:18.284 { 00:36:18.284 "name": null, 00:36:18.284 "uuid": "64df890b-7ed4-476d-8a86-4235e915b1f7", 00:36:18.284 "is_configured": false, 00:36:18.284 "data_offset": 0, 00:36:18.285 "data_size": 65536 00:36:18.285 }, 00:36:18.285 { 00:36:18.285 "name": null, 00:36:18.285 "uuid": "7691b8ee-5c53-4af5-a975-f949959d5df3", 00:36:18.285 "is_configured": false, 00:36:18.285 "data_offset": 0, 00:36:18.285 "data_size": 65536 00:36:18.285 }, 00:36:18.285 { 00:36:18.285 "name": "BaseBdev4", 00:36:18.285 "uuid": "1d328aa1-adca-4826-9760-82bd38cb1fcc", 00:36:18.285 "is_configured": true, 00:36:18.285 "data_offset": 0, 00:36:18.285 "data_size": 65536 00:36:18.285 } 00:36:18.285 ] 00:36:18.285 }' 00:36:18.285 23:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:18.285 23:17:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:36:18.543 23:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:36:18.543 23:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:18.543 23:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.543 23:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:18.543 23:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.543 23:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:36:18.543 23:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:36:18.543 23:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.543 23:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:18.543 [2024-12-09 23:17:59.168950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:18.543 23:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.544 23:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:36:18.544 23:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:18.544 23:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:18.544 23:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:18.544 23:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:18.544 23:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:36:18.544 23:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:18.544 23:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:18.544 23:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:18.544 23:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:18.803 23:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:18.803 23:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:18.803 23:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.803 23:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:18.803 23:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.803 23:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:18.803 "name": "Existed_Raid", 00:36:18.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:18.803 "strip_size_kb": 0, 00:36:18.803 "state": "configuring", 00:36:18.803 "raid_level": "raid1", 00:36:18.803 "superblock": false, 00:36:18.803 "num_base_bdevs": 4, 00:36:18.803 "num_base_bdevs_discovered": 3, 00:36:18.803 "num_base_bdevs_operational": 4, 00:36:18.803 "base_bdevs_list": [ 00:36:18.803 { 00:36:18.803 "name": "BaseBdev1", 00:36:18.803 "uuid": "a904813c-6dc8-4472-ae4b-d0e2980a844b", 00:36:18.803 "is_configured": true, 00:36:18.803 "data_offset": 0, 00:36:18.803 "data_size": 65536 00:36:18.803 }, 00:36:18.803 { 00:36:18.803 "name": null, 00:36:18.803 "uuid": "64df890b-7ed4-476d-8a86-4235e915b1f7", 00:36:18.803 "is_configured": false, 00:36:18.803 "data_offset": 0, 00:36:18.803 "data_size": 65536 00:36:18.803 }, 00:36:18.803 { 
00:36:18.803 "name": "BaseBdev3", 00:36:18.803 "uuid": "7691b8ee-5c53-4af5-a975-f949959d5df3", 00:36:18.803 "is_configured": true, 00:36:18.803 "data_offset": 0, 00:36:18.803 "data_size": 65536 00:36:18.803 }, 00:36:18.803 { 00:36:18.803 "name": "BaseBdev4", 00:36:18.803 "uuid": "1d328aa1-adca-4826-9760-82bd38cb1fcc", 00:36:18.803 "is_configured": true, 00:36:18.803 "data_offset": 0, 00:36:18.803 "data_size": 65536 00:36:18.803 } 00:36:18.803 ] 00:36:18.803 }' 00:36:18.803 23:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:18.803 23:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:19.062 23:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:19.062 23:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.062 23:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:36:19.062 23:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:19.062 23:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.062 23:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:36:19.062 23:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:36:19.062 23:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.062 23:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:19.062 [2024-12-09 23:17:59.608600] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:19.321 23:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.321 23:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:36:19.321 23:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:19.321 23:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:19.321 23:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:19.321 23:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:19.321 23:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:19.321 23:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:19.321 23:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:19.321 23:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:19.321 23:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:19.321 23:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:19.321 23:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:19.321 23:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.321 23:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:19.321 23:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.321 23:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:19.321 "name": "Existed_Raid", 00:36:19.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:19.321 "strip_size_kb": 0, 00:36:19.321 "state": "configuring", 00:36:19.321 "raid_level": "raid1", 00:36:19.321 "superblock": false, 00:36:19.321 
"num_base_bdevs": 4, 00:36:19.321 "num_base_bdevs_discovered": 2, 00:36:19.321 "num_base_bdevs_operational": 4, 00:36:19.321 "base_bdevs_list": [ 00:36:19.321 { 00:36:19.321 "name": null, 00:36:19.321 "uuid": "a904813c-6dc8-4472-ae4b-d0e2980a844b", 00:36:19.321 "is_configured": false, 00:36:19.321 "data_offset": 0, 00:36:19.321 "data_size": 65536 00:36:19.321 }, 00:36:19.321 { 00:36:19.321 "name": null, 00:36:19.321 "uuid": "64df890b-7ed4-476d-8a86-4235e915b1f7", 00:36:19.321 "is_configured": false, 00:36:19.321 "data_offset": 0, 00:36:19.321 "data_size": 65536 00:36:19.321 }, 00:36:19.321 { 00:36:19.321 "name": "BaseBdev3", 00:36:19.321 "uuid": "7691b8ee-5c53-4af5-a975-f949959d5df3", 00:36:19.321 "is_configured": true, 00:36:19.321 "data_offset": 0, 00:36:19.321 "data_size": 65536 00:36:19.321 }, 00:36:19.321 { 00:36:19.321 "name": "BaseBdev4", 00:36:19.321 "uuid": "1d328aa1-adca-4826-9760-82bd38cb1fcc", 00:36:19.321 "is_configured": true, 00:36:19.321 "data_offset": 0, 00:36:19.321 "data_size": 65536 00:36:19.321 } 00:36:19.321 ] 00:36:19.321 }' 00:36:19.321 23:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:19.321 23:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:19.582 23:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:19.582 23:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:36:19.582 23:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.582 23:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:19.582 23:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.582 23:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:36:19.582 23:18:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:36:19.582 23:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.582 23:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:19.582 [2024-12-09 23:18:00.183510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:19.582 23:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.582 23:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:36:19.582 23:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:19.582 23:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:19.582 23:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:19.582 23:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:19.582 23:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:19.582 23:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:19.582 23:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:19.582 23:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:19.582 23:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:19.582 23:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:19.582 23:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:19.582 23:18:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.582 23:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:19.841 23:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.841 23:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:19.841 "name": "Existed_Raid", 00:36:19.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:19.841 "strip_size_kb": 0, 00:36:19.841 "state": "configuring", 00:36:19.841 "raid_level": "raid1", 00:36:19.841 "superblock": false, 00:36:19.841 "num_base_bdevs": 4, 00:36:19.841 "num_base_bdevs_discovered": 3, 00:36:19.841 "num_base_bdevs_operational": 4, 00:36:19.841 "base_bdevs_list": [ 00:36:19.841 { 00:36:19.841 "name": null, 00:36:19.841 "uuid": "a904813c-6dc8-4472-ae4b-d0e2980a844b", 00:36:19.841 "is_configured": false, 00:36:19.842 "data_offset": 0, 00:36:19.842 "data_size": 65536 00:36:19.842 }, 00:36:19.842 { 00:36:19.842 "name": "BaseBdev2", 00:36:19.842 "uuid": "64df890b-7ed4-476d-8a86-4235e915b1f7", 00:36:19.842 "is_configured": true, 00:36:19.842 "data_offset": 0, 00:36:19.842 "data_size": 65536 00:36:19.842 }, 00:36:19.842 { 00:36:19.842 "name": "BaseBdev3", 00:36:19.842 "uuid": "7691b8ee-5c53-4af5-a975-f949959d5df3", 00:36:19.842 "is_configured": true, 00:36:19.842 "data_offset": 0, 00:36:19.842 "data_size": 65536 00:36:19.842 }, 00:36:19.842 { 00:36:19.842 "name": "BaseBdev4", 00:36:19.842 "uuid": "1d328aa1-adca-4826-9760-82bd38cb1fcc", 00:36:19.842 "is_configured": true, 00:36:19.842 "data_offset": 0, 00:36:19.842 "data_size": 65536 00:36:19.842 } 00:36:19.842 ] 00:36:19.842 }' 00:36:19.842 23:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:19.842 23:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.100 23:18:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:36:20.100 23:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:20.100 23:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.100 23:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.100 23:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.100 23:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:36:20.100 23:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:20.100 23:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.100 23:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.100 23:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:36:20.100 23:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.100 23:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a904813c-6dc8-4472-ae4b-d0e2980a844b 00:36:20.100 23:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.100 23:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.361 [2024-12-09 23:18:00.754238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:36:20.361 [2024-12-09 23:18:00.754314] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:36:20.361 [2024-12-09 23:18:00.754330] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:36:20.361 [2024-12-09 23:18:00.754645] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:36:20.361 [2024-12-09 23:18:00.754815] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:36:20.361 [2024-12-09 23:18:00.754828] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:36:20.361 [2024-12-09 23:18:00.755114] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:20.361 NewBaseBdev 00:36:20.361 23:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.361 23:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:36:20.361 23:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:36:20.361 23:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:20.361 23:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:20.362 23:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:20.362 23:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:20.362 23:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:20.362 23:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.362 23:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.362 23:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.362 23:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:36:20.362 23:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.362 23:18:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.362 [ 00:36:20.362 { 00:36:20.362 "name": "NewBaseBdev", 00:36:20.362 "aliases": [ 00:36:20.362 "a904813c-6dc8-4472-ae4b-d0e2980a844b" 00:36:20.362 ], 00:36:20.362 "product_name": "Malloc disk", 00:36:20.362 "block_size": 512, 00:36:20.362 "num_blocks": 65536, 00:36:20.362 "uuid": "a904813c-6dc8-4472-ae4b-d0e2980a844b", 00:36:20.362 "assigned_rate_limits": { 00:36:20.362 "rw_ios_per_sec": 0, 00:36:20.362 "rw_mbytes_per_sec": 0, 00:36:20.362 "r_mbytes_per_sec": 0, 00:36:20.362 "w_mbytes_per_sec": 0 00:36:20.362 }, 00:36:20.362 "claimed": true, 00:36:20.362 "claim_type": "exclusive_write", 00:36:20.362 "zoned": false, 00:36:20.362 "supported_io_types": { 00:36:20.362 "read": true, 00:36:20.362 "write": true, 00:36:20.362 "unmap": true, 00:36:20.362 "flush": true, 00:36:20.362 "reset": true, 00:36:20.362 "nvme_admin": false, 00:36:20.362 "nvme_io": false, 00:36:20.362 "nvme_io_md": false, 00:36:20.362 "write_zeroes": true, 00:36:20.362 "zcopy": true, 00:36:20.362 "get_zone_info": false, 00:36:20.362 "zone_management": false, 00:36:20.362 "zone_append": false, 00:36:20.362 "compare": false, 00:36:20.362 "compare_and_write": false, 00:36:20.362 "abort": true, 00:36:20.362 "seek_hole": false, 00:36:20.362 "seek_data": false, 00:36:20.362 "copy": true, 00:36:20.362 "nvme_iov_md": false 00:36:20.362 }, 00:36:20.362 "memory_domains": [ 00:36:20.362 { 00:36:20.362 "dma_device_id": "system", 00:36:20.362 "dma_device_type": 1 00:36:20.362 }, 00:36:20.362 { 00:36:20.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:20.362 "dma_device_type": 2 00:36:20.362 } 00:36:20.362 ], 00:36:20.362 "driver_specific": {} 00:36:20.362 } 00:36:20.362 ] 00:36:20.362 23:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.362 23:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:20.362 23:18:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:36:20.362 23:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:20.362 23:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:20.362 23:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:20.362 23:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:20.362 23:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:20.362 23:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:20.362 23:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:20.362 23:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:20.362 23:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:20.362 23:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:20.362 23:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:20.362 23:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.362 23:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.362 23:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.362 23:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:20.362 "name": "Existed_Raid", 00:36:20.362 "uuid": "fb62cd82-ec16-487c-a7fa-db5709f7dc65", 00:36:20.362 "strip_size_kb": 0, 00:36:20.362 "state": "online", 00:36:20.362 "raid_level": "raid1", 
00:36:20.362 "superblock": false, 00:36:20.362 "num_base_bdevs": 4, 00:36:20.362 "num_base_bdevs_discovered": 4, 00:36:20.362 "num_base_bdevs_operational": 4, 00:36:20.362 "base_bdevs_list": [ 00:36:20.362 { 00:36:20.362 "name": "NewBaseBdev", 00:36:20.362 "uuid": "a904813c-6dc8-4472-ae4b-d0e2980a844b", 00:36:20.362 "is_configured": true, 00:36:20.362 "data_offset": 0, 00:36:20.362 "data_size": 65536 00:36:20.362 }, 00:36:20.362 { 00:36:20.362 "name": "BaseBdev2", 00:36:20.362 "uuid": "64df890b-7ed4-476d-8a86-4235e915b1f7", 00:36:20.362 "is_configured": true, 00:36:20.362 "data_offset": 0, 00:36:20.362 "data_size": 65536 00:36:20.362 }, 00:36:20.362 { 00:36:20.362 "name": "BaseBdev3", 00:36:20.362 "uuid": "7691b8ee-5c53-4af5-a975-f949959d5df3", 00:36:20.362 "is_configured": true, 00:36:20.362 "data_offset": 0, 00:36:20.362 "data_size": 65536 00:36:20.362 }, 00:36:20.362 { 00:36:20.362 "name": "BaseBdev4", 00:36:20.362 "uuid": "1d328aa1-adca-4826-9760-82bd38cb1fcc", 00:36:20.362 "is_configured": true, 00:36:20.362 "data_offset": 0, 00:36:20.363 "data_size": 65536 00:36:20.363 } 00:36:20.363 ] 00:36:20.363 }' 00:36:20.363 23:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:20.363 23:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.648 23:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:36:20.648 23:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:36:20.648 23:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:36:20.648 23:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:36:20.648 23:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:36:20.648 23:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:36:20.648 23:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:36:20.648 23:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.648 23:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.648 23:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:20.648 [2024-12-09 23:18:01.237993] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:20.648 23:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.907 23:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:20.907 "name": "Existed_Raid", 00:36:20.907 "aliases": [ 00:36:20.907 "fb62cd82-ec16-487c-a7fa-db5709f7dc65" 00:36:20.907 ], 00:36:20.907 "product_name": "Raid Volume", 00:36:20.907 "block_size": 512, 00:36:20.907 "num_blocks": 65536, 00:36:20.907 "uuid": "fb62cd82-ec16-487c-a7fa-db5709f7dc65", 00:36:20.907 "assigned_rate_limits": { 00:36:20.907 "rw_ios_per_sec": 0, 00:36:20.907 "rw_mbytes_per_sec": 0, 00:36:20.907 "r_mbytes_per_sec": 0, 00:36:20.907 "w_mbytes_per_sec": 0 00:36:20.907 }, 00:36:20.907 "claimed": false, 00:36:20.907 "zoned": false, 00:36:20.907 "supported_io_types": { 00:36:20.907 "read": true, 00:36:20.907 "write": true, 00:36:20.907 "unmap": false, 00:36:20.907 "flush": false, 00:36:20.907 "reset": true, 00:36:20.907 "nvme_admin": false, 00:36:20.907 "nvme_io": false, 00:36:20.907 "nvme_io_md": false, 00:36:20.907 "write_zeroes": true, 00:36:20.907 "zcopy": false, 00:36:20.907 "get_zone_info": false, 00:36:20.907 "zone_management": false, 00:36:20.907 "zone_append": false, 00:36:20.907 "compare": false, 00:36:20.907 "compare_and_write": false, 00:36:20.907 "abort": false, 00:36:20.907 "seek_hole": false, 00:36:20.907 "seek_data": false, 00:36:20.907 "copy": false, 00:36:20.907 
"nvme_iov_md": false 00:36:20.907 }, 00:36:20.907 "memory_domains": [ 00:36:20.907 { 00:36:20.907 "dma_device_id": "system", 00:36:20.907 "dma_device_type": 1 00:36:20.907 }, 00:36:20.907 { 00:36:20.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:20.907 "dma_device_type": 2 00:36:20.907 }, 00:36:20.907 { 00:36:20.907 "dma_device_id": "system", 00:36:20.907 "dma_device_type": 1 00:36:20.907 }, 00:36:20.907 { 00:36:20.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:20.907 "dma_device_type": 2 00:36:20.907 }, 00:36:20.907 { 00:36:20.907 "dma_device_id": "system", 00:36:20.907 "dma_device_type": 1 00:36:20.907 }, 00:36:20.907 { 00:36:20.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:20.907 "dma_device_type": 2 00:36:20.907 }, 00:36:20.907 { 00:36:20.907 "dma_device_id": "system", 00:36:20.907 "dma_device_type": 1 00:36:20.907 }, 00:36:20.907 { 00:36:20.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:20.907 "dma_device_type": 2 00:36:20.907 } 00:36:20.907 ], 00:36:20.907 "driver_specific": { 00:36:20.907 "raid": { 00:36:20.907 "uuid": "fb62cd82-ec16-487c-a7fa-db5709f7dc65", 00:36:20.907 "strip_size_kb": 0, 00:36:20.907 "state": "online", 00:36:20.907 "raid_level": "raid1", 00:36:20.907 "superblock": false, 00:36:20.907 "num_base_bdevs": 4, 00:36:20.907 "num_base_bdevs_discovered": 4, 00:36:20.907 "num_base_bdevs_operational": 4, 00:36:20.907 "base_bdevs_list": [ 00:36:20.907 { 00:36:20.907 "name": "NewBaseBdev", 00:36:20.907 "uuid": "a904813c-6dc8-4472-ae4b-d0e2980a844b", 00:36:20.907 "is_configured": true, 00:36:20.907 "data_offset": 0, 00:36:20.907 "data_size": 65536 00:36:20.907 }, 00:36:20.907 { 00:36:20.907 "name": "BaseBdev2", 00:36:20.907 "uuid": "64df890b-7ed4-476d-8a86-4235e915b1f7", 00:36:20.907 "is_configured": true, 00:36:20.907 "data_offset": 0, 00:36:20.907 "data_size": 65536 00:36:20.907 }, 00:36:20.907 { 00:36:20.907 "name": "BaseBdev3", 00:36:20.907 "uuid": "7691b8ee-5c53-4af5-a975-f949959d5df3", 00:36:20.907 "is_configured": true, 
00:36:20.907 "data_offset": 0, 00:36:20.907 "data_size": 65536 00:36:20.907 }, 00:36:20.907 { 00:36:20.907 "name": "BaseBdev4", 00:36:20.907 "uuid": "1d328aa1-adca-4826-9760-82bd38cb1fcc", 00:36:20.907 "is_configured": true, 00:36:20.907 "data_offset": 0, 00:36:20.908 "data_size": 65536 00:36:20.908 } 00:36:20.908 ] 00:36:20.908 } 00:36:20.908 } 00:36:20.908 }' 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:36:20.908 BaseBdev2 00:36:20.908 BaseBdev3 00:36:20.908 BaseBdev4' 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.908 [2024-12-09 23:18:01.533226] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:20.908 [2024-12-09 23:18:01.533258] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:20.908 [2024-12-09 23:18:01.533350] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:20.908 [2024-12-09 23:18:01.533676] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:20.908 [2024-12-09 23:18:01.533696] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73044 
00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73044 ']' 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73044 00:36:20.908 23:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:36:21.166 23:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:21.166 23:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73044 00:36:21.166 killing process with pid 73044 00:36:21.166 23:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:21.166 23:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:21.166 23:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73044' 00:36:21.166 23:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73044 00:36:21.166 [2024-12-09 23:18:01.581002] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:21.166 23:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73044 00:36:21.425 [2024-12-09 23:18:02.017889] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:22.807 23:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:36:22.807 00:36:22.807 real 0m11.667s 00:36:22.807 user 0m18.403s 00:36:22.807 sys 0m2.292s 00:36:22.807 23:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:22.807 23:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.807 ************************************ 00:36:22.807 END TEST raid_state_function_test 00:36:22.807 ************************************ 00:36:22.807 23:18:03 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:36:22.807 23:18:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:36:22.807 23:18:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:22.807 23:18:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:22.807 ************************************ 00:36:22.807 START TEST raid_state_function_test_sb 00:36:22.807 ************************************ 00:36:22.807 23:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:36:22.807 23:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:36:22.807 23:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:36:22.807 23:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:36:22.807 23:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:36:22.807 23:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:36:22.807 23:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:22.807 23:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:36:22.807 23:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:22.807 23:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:22.807 23:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:36:22.807 23:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:22.807 23:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:22.807 23:18:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:36:22.807 23:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:22.807 23:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:22.807 23:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:36:22.807 23:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:22.807 23:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:22.807 23:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:36:22.807 23:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:36:22.807 23:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:36:22.807 23:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:36:22.807 23:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:36:22.807 23:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:36:22.807 23:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:36:22.807 23:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:36:22.807 23:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:36:22.807 23:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:36:22.807 23:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73714 00:36:22.807 Process raid pid: 73714 00:36:22.807 23:18:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:36:22.807 23:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73714' 00:36:22.807 23:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73714 00:36:22.808 23:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73714 ']' 00:36:22.808 23:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:22.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:22.808 23:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:22.808 23:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:22.808 23:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:22.808 23:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:22.808 [2024-12-09 23:18:03.420060] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:36:22.808 [2024-12-09 23:18:03.420191] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:23.069 [2024-12-09 23:18:03.606453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:23.327 [2024-12-09 23:18:03.738443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:23.585 [2024-12-09 23:18:03.972298] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:23.585 [2024-12-09 23:18:03.972352] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:23.844 23:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:23.844 23:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:36:23.844 23:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:36:23.844 23:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.844 23:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:23.844 [2024-12-09 23:18:04.328029] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:23.844 [2024-12-09 23:18:04.328097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:23.844 [2024-12-09 23:18:04.328111] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:23.844 [2024-12-09 23:18:04.328125] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:23.844 [2024-12-09 23:18:04.328134] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:36:23.844 [2024-12-09 23:18:04.328147] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:23.844 [2024-12-09 23:18:04.328155] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:36:23.844 [2024-12-09 23:18:04.328168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:36:23.844 23:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.844 23:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:36:23.844 23:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:23.844 23:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:23.844 23:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:23.844 23:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:23.844 23:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:23.844 23:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:23.844 23:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:23.844 23:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:23.844 23:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:23.844 23:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:23.844 23:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.844 23:18:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:23.844 23:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:23.844 23:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.844 23:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:23.844 "name": "Existed_Raid", 00:36:23.844 "uuid": "53aa84d9-eb8e-4fbf-a49c-d73536a87b5b", 00:36:23.844 "strip_size_kb": 0, 00:36:23.844 "state": "configuring", 00:36:23.844 "raid_level": "raid1", 00:36:23.844 "superblock": true, 00:36:23.844 "num_base_bdevs": 4, 00:36:23.844 "num_base_bdevs_discovered": 0, 00:36:23.844 "num_base_bdevs_operational": 4, 00:36:23.844 "base_bdevs_list": [ 00:36:23.844 { 00:36:23.844 "name": "BaseBdev1", 00:36:23.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:23.844 "is_configured": false, 00:36:23.844 "data_offset": 0, 00:36:23.844 "data_size": 0 00:36:23.844 }, 00:36:23.844 { 00:36:23.844 "name": "BaseBdev2", 00:36:23.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:23.844 "is_configured": false, 00:36:23.844 "data_offset": 0, 00:36:23.844 "data_size": 0 00:36:23.844 }, 00:36:23.844 { 00:36:23.844 "name": "BaseBdev3", 00:36:23.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:23.844 "is_configured": false, 00:36:23.844 "data_offset": 0, 00:36:23.844 "data_size": 0 00:36:23.844 }, 00:36:23.844 { 00:36:23.844 "name": "BaseBdev4", 00:36:23.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:23.844 "is_configured": false, 00:36:23.844 "data_offset": 0, 00:36:23.844 "data_size": 0 00:36:23.844 } 00:36:23.844 ] 00:36:23.844 }' 00:36:23.844 23:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:23.844 23:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:24.413 23:18:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:36:24.413 23:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.413 23:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:24.413 [2024-12-09 23:18:04.799358] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:24.413 [2024-12-09 23:18:04.799459] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:36:24.413 23:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.413 23:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:36:24.413 23:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.413 23:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:24.413 [2024-12-09 23:18:04.811340] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:24.413 [2024-12-09 23:18:04.811405] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:24.413 [2024-12-09 23:18:04.811417] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:24.414 [2024-12-09 23:18:04.811432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:24.414 [2024-12-09 23:18:04.811440] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:24.414 [2024-12-09 23:18:04.811454] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:24.414 [2024-12-09 23:18:04.811462] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev4 00:36:24.414 [2024-12-09 23:18:04.811475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:36:24.414 23:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.414 23:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:36:24.414 23:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.414 23:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:24.414 [2024-12-09 23:18:04.862675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:24.414 BaseBdev1 00:36:24.414 23:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.414 23:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:36:24.414 23:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:36:24.414 23:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:24.414 23:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:36:24.414 23:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:24.414 23:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:24.414 23:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:24.414 23:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.414 23:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:24.414 23:18:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.414 23:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:24.414 23:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.414 23:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:24.414 [ 00:36:24.414 { 00:36:24.414 "name": "BaseBdev1", 00:36:24.414 "aliases": [ 00:36:24.414 "1621b268-b323-4a56-8a7d-7fb65aeb5ca9" 00:36:24.414 ], 00:36:24.414 "product_name": "Malloc disk", 00:36:24.414 "block_size": 512, 00:36:24.414 "num_blocks": 65536, 00:36:24.414 "uuid": "1621b268-b323-4a56-8a7d-7fb65aeb5ca9", 00:36:24.414 "assigned_rate_limits": { 00:36:24.414 "rw_ios_per_sec": 0, 00:36:24.414 "rw_mbytes_per_sec": 0, 00:36:24.414 "r_mbytes_per_sec": 0, 00:36:24.414 "w_mbytes_per_sec": 0 00:36:24.414 }, 00:36:24.414 "claimed": true, 00:36:24.414 "claim_type": "exclusive_write", 00:36:24.414 "zoned": false, 00:36:24.414 "supported_io_types": { 00:36:24.414 "read": true, 00:36:24.414 "write": true, 00:36:24.414 "unmap": true, 00:36:24.414 "flush": true, 00:36:24.414 "reset": true, 00:36:24.414 "nvme_admin": false, 00:36:24.414 "nvme_io": false, 00:36:24.414 "nvme_io_md": false, 00:36:24.414 "write_zeroes": true, 00:36:24.414 "zcopy": true, 00:36:24.414 "get_zone_info": false, 00:36:24.414 "zone_management": false, 00:36:24.414 "zone_append": false, 00:36:24.414 "compare": false, 00:36:24.414 "compare_and_write": false, 00:36:24.414 "abort": true, 00:36:24.414 "seek_hole": false, 00:36:24.414 "seek_data": false, 00:36:24.414 "copy": true, 00:36:24.414 "nvme_iov_md": false 00:36:24.414 }, 00:36:24.414 "memory_domains": [ 00:36:24.414 { 00:36:24.414 "dma_device_id": "system", 00:36:24.414 "dma_device_type": 1 00:36:24.414 }, 00:36:24.414 { 00:36:24.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:24.414 "dma_device_type": 2 00:36:24.414 } 00:36:24.414 
], 00:36:24.414 "driver_specific": {} 00:36:24.414 } 00:36:24.414 ] 00:36:24.414 23:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.414 23:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:36:24.414 23:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:36:24.414 23:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:24.414 23:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:24.414 23:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:24.414 23:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:24.414 23:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:24.414 23:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:24.414 23:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:24.414 23:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:24.414 23:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:24.414 23:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:24.414 23:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.414 23:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:24.414 23:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:24.414 23:18:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.414 23:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:24.414 "name": "Existed_Raid", 00:36:24.414 "uuid": "719a7709-76b3-49d2-8645-7832e6a83cba", 00:36:24.414 "strip_size_kb": 0, 00:36:24.414 "state": "configuring", 00:36:24.414 "raid_level": "raid1", 00:36:24.414 "superblock": true, 00:36:24.414 "num_base_bdevs": 4, 00:36:24.414 "num_base_bdevs_discovered": 1, 00:36:24.414 "num_base_bdevs_operational": 4, 00:36:24.414 "base_bdevs_list": [ 00:36:24.414 { 00:36:24.414 "name": "BaseBdev1", 00:36:24.414 "uuid": "1621b268-b323-4a56-8a7d-7fb65aeb5ca9", 00:36:24.414 "is_configured": true, 00:36:24.414 "data_offset": 2048, 00:36:24.414 "data_size": 63488 00:36:24.414 }, 00:36:24.414 { 00:36:24.414 "name": "BaseBdev2", 00:36:24.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:24.414 "is_configured": false, 00:36:24.414 "data_offset": 0, 00:36:24.414 "data_size": 0 00:36:24.414 }, 00:36:24.414 { 00:36:24.414 "name": "BaseBdev3", 00:36:24.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:24.414 "is_configured": false, 00:36:24.414 "data_offset": 0, 00:36:24.414 "data_size": 0 00:36:24.414 }, 00:36:24.414 { 00:36:24.414 "name": "BaseBdev4", 00:36:24.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:24.414 "is_configured": false, 00:36:24.414 "data_offset": 0, 00:36:24.414 "data_size": 0 00:36:24.414 } 00:36:24.414 ] 00:36:24.414 }' 00:36:24.414 23:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:24.414 23:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:24.692 23:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:36:24.692 23:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.692 23:18:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:24.692 [2024-12-09 23:18:05.326439] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:24.692 [2024-12-09 23:18:05.326507] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:36:24.950 23:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.950 23:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:36:24.950 23:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.950 23:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:24.950 [2024-12-09 23:18:05.334500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:24.950 [2024-12-09 23:18:05.336723] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:24.950 [2024-12-09 23:18:05.336773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:24.950 [2024-12-09 23:18:05.336785] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:24.950 [2024-12-09 23:18:05.336801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:24.950 [2024-12-09 23:18:05.336810] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:36:24.950 [2024-12-09 23:18:05.336822] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:36:24.950 23:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.950 23:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 
1 )) 00:36:24.950 23:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:24.950 23:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:36:24.950 23:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:24.950 23:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:24.950 23:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:24.950 23:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:24.950 23:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:24.950 23:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:24.950 23:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:24.950 23:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:24.950 23:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:24.950 23:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:24.950 23:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.950 23:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:24.950 23:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:24.950 23:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.950 23:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:36:24.950 "name": "Existed_Raid", 00:36:24.950 "uuid": "2f64d102-2d35-4c5c-9b6d-8b44f2a4a38b", 00:36:24.950 "strip_size_kb": 0, 00:36:24.950 "state": "configuring", 00:36:24.950 "raid_level": "raid1", 00:36:24.950 "superblock": true, 00:36:24.950 "num_base_bdevs": 4, 00:36:24.950 "num_base_bdevs_discovered": 1, 00:36:24.950 "num_base_bdevs_operational": 4, 00:36:24.950 "base_bdevs_list": [ 00:36:24.950 { 00:36:24.950 "name": "BaseBdev1", 00:36:24.950 "uuid": "1621b268-b323-4a56-8a7d-7fb65aeb5ca9", 00:36:24.950 "is_configured": true, 00:36:24.950 "data_offset": 2048, 00:36:24.950 "data_size": 63488 00:36:24.950 }, 00:36:24.950 { 00:36:24.950 "name": "BaseBdev2", 00:36:24.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:24.950 "is_configured": false, 00:36:24.950 "data_offset": 0, 00:36:24.950 "data_size": 0 00:36:24.950 }, 00:36:24.950 { 00:36:24.950 "name": "BaseBdev3", 00:36:24.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:24.950 "is_configured": false, 00:36:24.950 "data_offset": 0, 00:36:24.950 "data_size": 0 00:36:24.950 }, 00:36:24.950 { 00:36:24.950 "name": "BaseBdev4", 00:36:24.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:24.950 "is_configured": false, 00:36:24.950 "data_offset": 0, 00:36:24.950 "data_size": 0 00:36:24.950 } 00:36:24.950 ] 00:36:24.950 }' 00:36:24.950 23:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:24.950 23:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:25.208 23:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:36:25.208 23:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.208 23:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:25.208 [2024-12-09 23:18:05.809851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:36:25.208 BaseBdev2 00:36:25.208 23:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.208 23:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:36:25.208 23:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:36:25.208 23:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:25.208 23:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:36:25.208 23:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:25.208 23:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:25.208 23:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:25.208 23:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.208 23:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:25.208 23:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.208 23:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:25.208 23:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.208 23:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:25.208 [ 00:36:25.208 { 00:36:25.208 "name": "BaseBdev2", 00:36:25.208 "aliases": [ 00:36:25.208 "0934b309-8a6f-4ce5-9595-709f660d71b1" 00:36:25.208 ], 00:36:25.208 "product_name": "Malloc disk", 00:36:25.208 "block_size": 512, 00:36:25.208 "num_blocks": 65536, 00:36:25.208 "uuid": "0934b309-8a6f-4ce5-9595-709f660d71b1", 00:36:25.466 
"assigned_rate_limits": { 00:36:25.466 "rw_ios_per_sec": 0, 00:36:25.466 "rw_mbytes_per_sec": 0, 00:36:25.466 "r_mbytes_per_sec": 0, 00:36:25.466 "w_mbytes_per_sec": 0 00:36:25.466 }, 00:36:25.466 "claimed": true, 00:36:25.466 "claim_type": "exclusive_write", 00:36:25.466 "zoned": false, 00:36:25.466 "supported_io_types": { 00:36:25.466 "read": true, 00:36:25.466 "write": true, 00:36:25.466 "unmap": true, 00:36:25.466 "flush": true, 00:36:25.466 "reset": true, 00:36:25.466 "nvme_admin": false, 00:36:25.466 "nvme_io": false, 00:36:25.466 "nvme_io_md": false, 00:36:25.466 "write_zeroes": true, 00:36:25.466 "zcopy": true, 00:36:25.466 "get_zone_info": false, 00:36:25.466 "zone_management": false, 00:36:25.466 "zone_append": false, 00:36:25.466 "compare": false, 00:36:25.466 "compare_and_write": false, 00:36:25.466 "abort": true, 00:36:25.466 "seek_hole": false, 00:36:25.466 "seek_data": false, 00:36:25.466 "copy": true, 00:36:25.466 "nvme_iov_md": false 00:36:25.466 }, 00:36:25.466 "memory_domains": [ 00:36:25.466 { 00:36:25.466 "dma_device_id": "system", 00:36:25.466 "dma_device_type": 1 00:36:25.466 }, 00:36:25.466 { 00:36:25.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:25.466 "dma_device_type": 2 00:36:25.466 } 00:36:25.466 ], 00:36:25.466 "driver_specific": {} 00:36:25.466 } 00:36:25.466 ] 00:36:25.466 23:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.466 23:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:36:25.466 23:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:36:25.466 23:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:25.466 23:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:36:25.466 23:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:36:25.466 23:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:25.466 23:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:25.466 23:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:25.466 23:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:25.466 23:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:25.466 23:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:25.466 23:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:25.466 23:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:25.466 23:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:25.466 23:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:25.466 23:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.466 23:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:25.466 23:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.466 23:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:25.466 "name": "Existed_Raid", 00:36:25.466 "uuid": "2f64d102-2d35-4c5c-9b6d-8b44f2a4a38b", 00:36:25.466 "strip_size_kb": 0, 00:36:25.466 "state": "configuring", 00:36:25.466 "raid_level": "raid1", 00:36:25.466 "superblock": true, 00:36:25.466 "num_base_bdevs": 4, 00:36:25.466 "num_base_bdevs_discovered": 2, 00:36:25.466 "num_base_bdevs_operational": 4, 
00:36:25.466 "base_bdevs_list": [ 00:36:25.466 { 00:36:25.466 "name": "BaseBdev1", 00:36:25.466 "uuid": "1621b268-b323-4a56-8a7d-7fb65aeb5ca9", 00:36:25.466 "is_configured": true, 00:36:25.466 "data_offset": 2048, 00:36:25.466 "data_size": 63488 00:36:25.466 }, 00:36:25.466 { 00:36:25.466 "name": "BaseBdev2", 00:36:25.466 "uuid": "0934b309-8a6f-4ce5-9595-709f660d71b1", 00:36:25.466 "is_configured": true, 00:36:25.466 "data_offset": 2048, 00:36:25.466 "data_size": 63488 00:36:25.466 }, 00:36:25.466 { 00:36:25.466 "name": "BaseBdev3", 00:36:25.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:25.466 "is_configured": false, 00:36:25.466 "data_offset": 0, 00:36:25.466 "data_size": 0 00:36:25.466 }, 00:36:25.466 { 00:36:25.466 "name": "BaseBdev4", 00:36:25.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:25.466 "is_configured": false, 00:36:25.466 "data_offset": 0, 00:36:25.466 "data_size": 0 00:36:25.466 } 00:36:25.466 ] 00:36:25.466 }' 00:36:25.466 23:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:25.466 23:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:25.725 23:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:36:25.725 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.725 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:25.725 [2024-12-09 23:18:06.327159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:25.725 BaseBdev3 00:36:25.725 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.725 23:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:36:25.725 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:36:25.725 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:25.725 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:36:25.725 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:25.725 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:25.725 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:25.725 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.725 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:25.725 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.725 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:36:25.725 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.725 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:25.725 [ 00:36:25.725 { 00:36:25.725 "name": "BaseBdev3", 00:36:25.725 "aliases": [ 00:36:25.725 "9c5c1276-b197-4d66-a94a-328b2ea96f51" 00:36:25.725 ], 00:36:25.725 "product_name": "Malloc disk", 00:36:25.725 "block_size": 512, 00:36:25.725 "num_blocks": 65536, 00:36:25.725 "uuid": "9c5c1276-b197-4d66-a94a-328b2ea96f51", 00:36:25.725 "assigned_rate_limits": { 00:36:25.725 "rw_ios_per_sec": 0, 00:36:25.725 "rw_mbytes_per_sec": 0, 00:36:25.983 "r_mbytes_per_sec": 0, 00:36:25.983 "w_mbytes_per_sec": 0 00:36:25.983 }, 00:36:25.983 "claimed": true, 00:36:25.983 "claim_type": "exclusive_write", 00:36:25.983 "zoned": false, 00:36:25.983 "supported_io_types": { 00:36:25.983 "read": true, 00:36:25.983 
"write": true, 00:36:25.983 "unmap": true, 00:36:25.983 "flush": true, 00:36:25.983 "reset": true, 00:36:25.983 "nvme_admin": false, 00:36:25.983 "nvme_io": false, 00:36:25.983 "nvme_io_md": false, 00:36:25.983 "write_zeroes": true, 00:36:25.983 "zcopy": true, 00:36:25.983 "get_zone_info": false, 00:36:25.983 "zone_management": false, 00:36:25.983 "zone_append": false, 00:36:25.983 "compare": false, 00:36:25.983 "compare_and_write": false, 00:36:25.983 "abort": true, 00:36:25.983 "seek_hole": false, 00:36:25.983 "seek_data": false, 00:36:25.983 "copy": true, 00:36:25.983 "nvme_iov_md": false 00:36:25.983 }, 00:36:25.983 "memory_domains": [ 00:36:25.983 { 00:36:25.983 "dma_device_id": "system", 00:36:25.983 "dma_device_type": 1 00:36:25.983 }, 00:36:25.983 { 00:36:25.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:25.983 "dma_device_type": 2 00:36:25.983 } 00:36:25.983 ], 00:36:25.983 "driver_specific": {} 00:36:25.983 } 00:36:25.983 ] 00:36:25.983 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.983 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:36:25.983 23:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:36:25.983 23:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:25.983 23:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:36:25.983 23:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:25.983 23:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:25.983 23:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:25.983 23:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:36:25.983 23:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:25.983 23:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:25.983 23:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:25.983 23:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:25.983 23:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:25.983 23:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:25.983 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.983 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:25.983 23:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:25.983 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.983 23:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:25.983 "name": "Existed_Raid", 00:36:25.983 "uuid": "2f64d102-2d35-4c5c-9b6d-8b44f2a4a38b", 00:36:25.983 "strip_size_kb": 0, 00:36:25.983 "state": "configuring", 00:36:25.983 "raid_level": "raid1", 00:36:25.983 "superblock": true, 00:36:25.983 "num_base_bdevs": 4, 00:36:25.984 "num_base_bdevs_discovered": 3, 00:36:25.984 "num_base_bdevs_operational": 4, 00:36:25.984 "base_bdevs_list": [ 00:36:25.984 { 00:36:25.984 "name": "BaseBdev1", 00:36:25.984 "uuid": "1621b268-b323-4a56-8a7d-7fb65aeb5ca9", 00:36:25.984 "is_configured": true, 00:36:25.984 "data_offset": 2048, 00:36:25.984 "data_size": 63488 00:36:25.984 }, 00:36:25.984 { 00:36:25.984 "name": "BaseBdev2", 00:36:25.984 "uuid": 
"0934b309-8a6f-4ce5-9595-709f660d71b1", 00:36:25.984 "is_configured": true, 00:36:25.984 "data_offset": 2048, 00:36:25.984 "data_size": 63488 00:36:25.984 }, 00:36:25.984 { 00:36:25.984 "name": "BaseBdev3", 00:36:25.984 "uuid": "9c5c1276-b197-4d66-a94a-328b2ea96f51", 00:36:25.984 "is_configured": true, 00:36:25.984 "data_offset": 2048, 00:36:25.984 "data_size": 63488 00:36:25.984 }, 00:36:25.984 { 00:36:25.984 "name": "BaseBdev4", 00:36:25.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:25.984 "is_configured": false, 00:36:25.984 "data_offset": 0, 00:36:25.984 "data_size": 0 00:36:25.984 } 00:36:25.984 ] 00:36:25.984 }' 00:36:25.984 23:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:25.984 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:26.242 23:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:36:26.242 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.242 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:26.242 [2024-12-09 23:18:06.807004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:36:26.242 [2024-12-09 23:18:06.807314] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:36:26.242 [2024-12-09 23:18:06.807335] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:36:26.242 [2024-12-09 23:18:06.807734] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:36:26.242 BaseBdev4 00:36:26.242 [2024-12-09 23:18:06.807922] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:36:26.242 [2024-12-09 23:18:06.807938] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:36:26.242 [2024-12-09 23:18:06.808085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:26.242 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.242 23:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:36:26.242 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:36:26.242 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:26.242 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:36:26.242 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:26.242 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:26.242 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:26.242 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.242 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:26.242 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.242 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:36:26.242 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.243 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:26.243 [ 00:36:26.243 { 00:36:26.243 "name": "BaseBdev4", 00:36:26.243 "aliases": [ 00:36:26.243 "b771aa18-1695-462a-bf30-98d507b01abf" 00:36:26.243 ], 00:36:26.243 "product_name": "Malloc disk", 00:36:26.243 "block_size": 512, 00:36:26.243 
"num_blocks": 65536, 00:36:26.243 "uuid": "b771aa18-1695-462a-bf30-98d507b01abf", 00:36:26.243 "assigned_rate_limits": { 00:36:26.243 "rw_ios_per_sec": 0, 00:36:26.243 "rw_mbytes_per_sec": 0, 00:36:26.243 "r_mbytes_per_sec": 0, 00:36:26.243 "w_mbytes_per_sec": 0 00:36:26.243 }, 00:36:26.243 "claimed": true, 00:36:26.243 "claim_type": "exclusive_write", 00:36:26.243 "zoned": false, 00:36:26.243 "supported_io_types": { 00:36:26.243 "read": true, 00:36:26.243 "write": true, 00:36:26.243 "unmap": true, 00:36:26.243 "flush": true, 00:36:26.243 "reset": true, 00:36:26.243 "nvme_admin": false, 00:36:26.243 "nvme_io": false, 00:36:26.243 "nvme_io_md": false, 00:36:26.243 "write_zeroes": true, 00:36:26.243 "zcopy": true, 00:36:26.243 "get_zone_info": false, 00:36:26.243 "zone_management": false, 00:36:26.243 "zone_append": false, 00:36:26.243 "compare": false, 00:36:26.243 "compare_and_write": false, 00:36:26.243 "abort": true, 00:36:26.243 "seek_hole": false, 00:36:26.243 "seek_data": false, 00:36:26.243 "copy": true, 00:36:26.243 "nvme_iov_md": false 00:36:26.243 }, 00:36:26.243 "memory_domains": [ 00:36:26.243 { 00:36:26.243 "dma_device_id": "system", 00:36:26.243 "dma_device_type": 1 00:36:26.243 }, 00:36:26.243 { 00:36:26.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:26.243 "dma_device_type": 2 00:36:26.243 } 00:36:26.243 ], 00:36:26.243 "driver_specific": {} 00:36:26.243 } 00:36:26.243 ] 00:36:26.243 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.243 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:36:26.243 23:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:36:26.243 23:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:26.243 23:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:36:26.243 23:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:26.243 23:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:26.243 23:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:26.243 23:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:26.243 23:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:26.243 23:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:26.243 23:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:26.243 23:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:26.243 23:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:26.243 23:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:26.243 23:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:26.243 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.243 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:26.507 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.507 23:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:26.507 "name": "Existed_Raid", 00:36:26.507 "uuid": "2f64d102-2d35-4c5c-9b6d-8b44f2a4a38b", 00:36:26.507 "strip_size_kb": 0, 00:36:26.507 "state": "online", 00:36:26.507 "raid_level": "raid1", 00:36:26.507 "superblock": true, 00:36:26.507 "num_base_bdevs": 4, 
00:36:26.507 "num_base_bdevs_discovered": 4, 00:36:26.507 "num_base_bdevs_operational": 4, 00:36:26.507 "base_bdevs_list": [ 00:36:26.507 { 00:36:26.507 "name": "BaseBdev1", 00:36:26.507 "uuid": "1621b268-b323-4a56-8a7d-7fb65aeb5ca9", 00:36:26.507 "is_configured": true, 00:36:26.507 "data_offset": 2048, 00:36:26.507 "data_size": 63488 00:36:26.507 }, 00:36:26.507 { 00:36:26.507 "name": "BaseBdev2", 00:36:26.507 "uuid": "0934b309-8a6f-4ce5-9595-709f660d71b1", 00:36:26.507 "is_configured": true, 00:36:26.507 "data_offset": 2048, 00:36:26.507 "data_size": 63488 00:36:26.507 }, 00:36:26.507 { 00:36:26.507 "name": "BaseBdev3", 00:36:26.507 "uuid": "9c5c1276-b197-4d66-a94a-328b2ea96f51", 00:36:26.507 "is_configured": true, 00:36:26.507 "data_offset": 2048, 00:36:26.507 "data_size": 63488 00:36:26.507 }, 00:36:26.507 { 00:36:26.507 "name": "BaseBdev4", 00:36:26.507 "uuid": "b771aa18-1695-462a-bf30-98d507b01abf", 00:36:26.507 "is_configured": true, 00:36:26.508 "data_offset": 2048, 00:36:26.508 "data_size": 63488 00:36:26.508 } 00:36:26.508 ] 00:36:26.508 }' 00:36:26.508 23:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:26.508 23:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:26.766 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:36:26.766 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:36:26.766 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:36:26.766 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:36:26.766 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:36:26.766 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:36:26.766 
23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:26.766 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:36:26.766 23:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.766 23:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:26.766 [2024-12-09 23:18:07.286845] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:26.766 23:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.766 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:26.766 "name": "Existed_Raid", 00:36:26.766 "aliases": [ 00:36:26.766 "2f64d102-2d35-4c5c-9b6d-8b44f2a4a38b" 00:36:26.766 ], 00:36:26.766 "product_name": "Raid Volume", 00:36:26.766 "block_size": 512, 00:36:26.766 "num_blocks": 63488, 00:36:26.766 "uuid": "2f64d102-2d35-4c5c-9b6d-8b44f2a4a38b", 00:36:26.766 "assigned_rate_limits": { 00:36:26.766 "rw_ios_per_sec": 0, 00:36:26.766 "rw_mbytes_per_sec": 0, 00:36:26.766 "r_mbytes_per_sec": 0, 00:36:26.766 "w_mbytes_per_sec": 0 00:36:26.766 }, 00:36:26.766 "claimed": false, 00:36:26.766 "zoned": false, 00:36:26.766 "supported_io_types": { 00:36:26.766 "read": true, 00:36:26.766 "write": true, 00:36:26.766 "unmap": false, 00:36:26.766 "flush": false, 00:36:26.766 "reset": true, 00:36:26.766 "nvme_admin": false, 00:36:26.766 "nvme_io": false, 00:36:26.766 "nvme_io_md": false, 00:36:26.766 "write_zeroes": true, 00:36:26.766 "zcopy": false, 00:36:26.766 "get_zone_info": false, 00:36:26.766 "zone_management": false, 00:36:26.766 "zone_append": false, 00:36:26.766 "compare": false, 00:36:26.767 "compare_and_write": false, 00:36:26.767 "abort": false, 00:36:26.767 "seek_hole": false, 00:36:26.767 "seek_data": false, 00:36:26.767 "copy": false, 00:36:26.767 
"nvme_iov_md": false 00:36:26.767 }, 00:36:26.767 "memory_domains": [ 00:36:26.767 { 00:36:26.767 "dma_device_id": "system", 00:36:26.767 "dma_device_type": 1 00:36:26.767 }, 00:36:26.767 { 00:36:26.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:26.767 "dma_device_type": 2 00:36:26.767 }, 00:36:26.767 { 00:36:26.767 "dma_device_id": "system", 00:36:26.767 "dma_device_type": 1 00:36:26.767 }, 00:36:26.767 { 00:36:26.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:26.767 "dma_device_type": 2 00:36:26.767 }, 00:36:26.767 { 00:36:26.767 "dma_device_id": "system", 00:36:26.767 "dma_device_type": 1 00:36:26.767 }, 00:36:26.767 { 00:36:26.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:26.767 "dma_device_type": 2 00:36:26.767 }, 00:36:26.767 { 00:36:26.767 "dma_device_id": "system", 00:36:26.767 "dma_device_type": 1 00:36:26.767 }, 00:36:26.767 { 00:36:26.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:26.767 "dma_device_type": 2 00:36:26.767 } 00:36:26.767 ], 00:36:26.767 "driver_specific": { 00:36:26.767 "raid": { 00:36:26.767 "uuid": "2f64d102-2d35-4c5c-9b6d-8b44f2a4a38b", 00:36:26.767 "strip_size_kb": 0, 00:36:26.767 "state": "online", 00:36:26.767 "raid_level": "raid1", 00:36:26.767 "superblock": true, 00:36:26.767 "num_base_bdevs": 4, 00:36:26.767 "num_base_bdevs_discovered": 4, 00:36:26.767 "num_base_bdevs_operational": 4, 00:36:26.767 "base_bdevs_list": [ 00:36:26.767 { 00:36:26.767 "name": "BaseBdev1", 00:36:26.767 "uuid": "1621b268-b323-4a56-8a7d-7fb65aeb5ca9", 00:36:26.767 "is_configured": true, 00:36:26.767 "data_offset": 2048, 00:36:26.767 "data_size": 63488 00:36:26.767 }, 00:36:26.767 { 00:36:26.767 "name": "BaseBdev2", 00:36:26.767 "uuid": "0934b309-8a6f-4ce5-9595-709f660d71b1", 00:36:26.767 "is_configured": true, 00:36:26.767 "data_offset": 2048, 00:36:26.767 "data_size": 63488 00:36:26.767 }, 00:36:26.767 { 00:36:26.767 "name": "BaseBdev3", 00:36:26.767 "uuid": "9c5c1276-b197-4d66-a94a-328b2ea96f51", 00:36:26.767 "is_configured": true, 
00:36:26.767 "data_offset": 2048, 00:36:26.767 "data_size": 63488 00:36:26.767 }, 00:36:26.767 { 00:36:26.767 "name": "BaseBdev4", 00:36:26.767 "uuid": "b771aa18-1695-462a-bf30-98d507b01abf", 00:36:26.767 "is_configured": true, 00:36:26.767 "data_offset": 2048, 00:36:26.767 "data_size": 63488 00:36:26.767 } 00:36:26.767 ] 00:36:26.767 } 00:36:26.767 } 00:36:26.767 }' 00:36:26.767 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:26.767 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:36:26.767 BaseBdev2 00:36:26.767 BaseBdev3 00:36:26.767 BaseBdev4' 00:36:26.767 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:27.025 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:36:27.025 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:27.025 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:36:27.025 23:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.025 23:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:27.025 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:27.025 23:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.025 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:27.025 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:27.025 23:18:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:27.025 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:36:27.025 23:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.025 23:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:27.025 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:27.025 23:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.025 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:27.025 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:27.025 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:27.025 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:27.025 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:36:27.025 23:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.025 23:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:27.025 23:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.025 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:27.025 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:27.025 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:36:27.025 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:27.025 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:36:27.025 23:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.025 23:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:27.025 23:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.025 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:27.026 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:27.026 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:36:27.026 23:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.026 23:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:27.026 [2024-12-09 23:18:07.590518] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:27.283 23:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.283 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:36:27.283 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:36:27.283 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:36:27.283 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:36:27.283 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:36:27.283 23:18:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:36:27.283 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:27.283 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:27.284 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:27.284 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:27.284 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:27.284 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:27.284 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:27.284 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:27.284 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:27.284 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:27.284 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:27.284 23:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.284 23:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:27.284 23:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.284 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:27.284 "name": "Existed_Raid", 00:36:27.284 "uuid": "2f64d102-2d35-4c5c-9b6d-8b44f2a4a38b", 00:36:27.284 "strip_size_kb": 0, 00:36:27.284 
"state": "online", 00:36:27.284 "raid_level": "raid1", 00:36:27.284 "superblock": true, 00:36:27.284 "num_base_bdevs": 4, 00:36:27.284 "num_base_bdevs_discovered": 3, 00:36:27.284 "num_base_bdevs_operational": 3, 00:36:27.284 "base_bdevs_list": [ 00:36:27.284 { 00:36:27.284 "name": null, 00:36:27.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:27.284 "is_configured": false, 00:36:27.284 "data_offset": 0, 00:36:27.284 "data_size": 63488 00:36:27.284 }, 00:36:27.284 { 00:36:27.284 "name": "BaseBdev2", 00:36:27.284 "uuid": "0934b309-8a6f-4ce5-9595-709f660d71b1", 00:36:27.284 "is_configured": true, 00:36:27.284 "data_offset": 2048, 00:36:27.284 "data_size": 63488 00:36:27.284 }, 00:36:27.284 { 00:36:27.284 "name": "BaseBdev3", 00:36:27.284 "uuid": "9c5c1276-b197-4d66-a94a-328b2ea96f51", 00:36:27.284 "is_configured": true, 00:36:27.284 "data_offset": 2048, 00:36:27.284 "data_size": 63488 00:36:27.284 }, 00:36:27.284 { 00:36:27.284 "name": "BaseBdev4", 00:36:27.284 "uuid": "b771aa18-1695-462a-bf30-98d507b01abf", 00:36:27.284 "is_configured": true, 00:36:27.284 "data_offset": 2048, 00:36:27.284 "data_size": 63488 00:36:27.284 } 00:36:27.284 ] 00:36:27.284 }' 00:36:27.284 23:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:27.284 23:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:27.541 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:36:27.541 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:27.541 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:36:27.541 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:27.541 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.541 23:18:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:27.541 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.798 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:36:27.798 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:27.798 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:36:27.798 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.798 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:27.798 [2024-12-09 23:18:08.199596] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:27.798 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.798 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:36:27.798 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:27.798 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:27.798 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.798 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:27.798 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:36:27.798 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.798 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:36:27.798 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:36:27.798 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:36:27.798 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.798 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:27.798 [2024-12-09 23:18:08.362711] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:36:28.056 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.056 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:36:28.056 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:28.056 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:28.056 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.056 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:28.056 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:36:28.056 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.056 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:36:28.056 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:28.056 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:36:28.056 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.056 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:28.056 [2024-12-09 23:18:08.519079] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:36:28.056 [2024-12-09 23:18:08.519185] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:28.056 [2024-12-09 23:18:08.622786] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:28.056 [2024-12-09 23:18:08.622857] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:28.056 [2024-12-09 23:18:08.622874] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:36:28.056 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.056 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:36:28.056 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:28.056 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:28.056 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.056 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:28.056 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:36:28.056 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.056 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:36:28.056 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:36:28.056 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:36:28.056 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:36:28.056 23:18:08 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:36:28.056 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:36:28.056 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.056 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:28.315 BaseBdev2 00:36:28.315 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.315 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:36:28.315 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:36:28.315 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:28.315 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:36:28.315 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:28.315 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:28.315 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:28.315 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.315 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:28.315 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.315 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:28.315 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.315 23:18:08 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:36:28.315 [ 00:36:28.315 { 00:36:28.315 "name": "BaseBdev2", 00:36:28.315 "aliases": [ 00:36:28.315 "31f485b9-6ba6-4d1f-b83a-0e19b1aa6ecb" 00:36:28.315 ], 00:36:28.315 "product_name": "Malloc disk", 00:36:28.315 "block_size": 512, 00:36:28.315 "num_blocks": 65536, 00:36:28.315 "uuid": "31f485b9-6ba6-4d1f-b83a-0e19b1aa6ecb", 00:36:28.315 "assigned_rate_limits": { 00:36:28.315 "rw_ios_per_sec": 0, 00:36:28.315 "rw_mbytes_per_sec": 0, 00:36:28.315 "r_mbytes_per_sec": 0, 00:36:28.315 "w_mbytes_per_sec": 0 00:36:28.315 }, 00:36:28.315 "claimed": false, 00:36:28.315 "zoned": false, 00:36:28.315 "supported_io_types": { 00:36:28.315 "read": true, 00:36:28.315 "write": true, 00:36:28.315 "unmap": true, 00:36:28.315 "flush": true, 00:36:28.315 "reset": true, 00:36:28.315 "nvme_admin": false, 00:36:28.315 "nvme_io": false, 00:36:28.315 "nvme_io_md": false, 00:36:28.315 "write_zeroes": true, 00:36:28.315 "zcopy": true, 00:36:28.315 "get_zone_info": false, 00:36:28.315 "zone_management": false, 00:36:28.315 "zone_append": false, 00:36:28.315 "compare": false, 00:36:28.315 "compare_and_write": false, 00:36:28.315 "abort": true, 00:36:28.315 "seek_hole": false, 00:36:28.315 "seek_data": false, 00:36:28.315 "copy": true, 00:36:28.315 "nvme_iov_md": false 00:36:28.315 }, 00:36:28.315 "memory_domains": [ 00:36:28.315 { 00:36:28.315 "dma_device_id": "system", 00:36:28.315 "dma_device_type": 1 00:36:28.315 }, 00:36:28.315 { 00:36:28.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:28.315 "dma_device_type": 2 00:36:28.315 } 00:36:28.315 ], 00:36:28.315 "driver_specific": {} 00:36:28.315 } 00:36:28.315 ] 00:36:28.315 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.315 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:36:28.315 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:36:28.315 23:18:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:36:28.315 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:36:28.315 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.315 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:28.315 BaseBdev3 00:36:28.315 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.315 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:36:28.315 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:36:28.315 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:28.315 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:36:28.315 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:28.315 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:28.315 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:28.315 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.315 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:28.315 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.315 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:36:28.315 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.315 23:18:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:28.315 [ 00:36:28.315 { 00:36:28.315 "name": "BaseBdev3", 00:36:28.315 "aliases": [ 00:36:28.315 "afa20123-5bba-4ac2-b75c-05dbbda4c3ea" 00:36:28.315 ], 00:36:28.315 "product_name": "Malloc disk", 00:36:28.315 "block_size": 512, 00:36:28.315 "num_blocks": 65536, 00:36:28.315 "uuid": "afa20123-5bba-4ac2-b75c-05dbbda4c3ea", 00:36:28.315 "assigned_rate_limits": { 00:36:28.315 "rw_ios_per_sec": 0, 00:36:28.315 "rw_mbytes_per_sec": 0, 00:36:28.315 "r_mbytes_per_sec": 0, 00:36:28.315 "w_mbytes_per_sec": 0 00:36:28.315 }, 00:36:28.315 "claimed": false, 00:36:28.315 "zoned": false, 00:36:28.315 "supported_io_types": { 00:36:28.315 "read": true, 00:36:28.315 "write": true, 00:36:28.315 "unmap": true, 00:36:28.315 "flush": true, 00:36:28.315 "reset": true, 00:36:28.315 "nvme_admin": false, 00:36:28.315 "nvme_io": false, 00:36:28.315 "nvme_io_md": false, 00:36:28.315 "write_zeroes": true, 00:36:28.315 "zcopy": true, 00:36:28.315 "get_zone_info": false, 00:36:28.315 "zone_management": false, 00:36:28.315 "zone_append": false, 00:36:28.315 "compare": false, 00:36:28.315 "compare_and_write": false, 00:36:28.315 "abort": true, 00:36:28.315 "seek_hole": false, 00:36:28.315 "seek_data": false, 00:36:28.315 "copy": true, 00:36:28.315 "nvme_iov_md": false 00:36:28.315 }, 00:36:28.315 "memory_domains": [ 00:36:28.315 { 00:36:28.316 "dma_device_id": "system", 00:36:28.316 "dma_device_type": 1 00:36:28.316 }, 00:36:28.316 { 00:36:28.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:28.316 "dma_device_type": 2 00:36:28.316 } 00:36:28.316 ], 00:36:28.316 "driver_specific": {} 00:36:28.316 } 00:36:28.316 ] 00:36:28.316 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.316 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:36:28.316 23:18:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:36:28.316 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:36:28.316 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:36:28.316 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.316 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:28.316 BaseBdev4 00:36:28.316 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.316 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:36:28.316 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:36:28.316 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:28.316 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:36:28.316 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:28.316 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:28.316 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:28.316 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.316 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:28.316 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.316 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:36:28.316 23:18:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.316 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:28.316 [ 00:36:28.316 { 00:36:28.316 "name": "BaseBdev4", 00:36:28.316 "aliases": [ 00:36:28.316 "849ed2a1-2723-4557-bcba-be25b2d4684f" 00:36:28.316 ], 00:36:28.316 "product_name": "Malloc disk", 00:36:28.316 "block_size": 512, 00:36:28.316 "num_blocks": 65536, 00:36:28.316 "uuid": "849ed2a1-2723-4557-bcba-be25b2d4684f", 00:36:28.316 "assigned_rate_limits": { 00:36:28.316 "rw_ios_per_sec": 0, 00:36:28.316 "rw_mbytes_per_sec": 0, 00:36:28.316 "r_mbytes_per_sec": 0, 00:36:28.316 "w_mbytes_per_sec": 0 00:36:28.316 }, 00:36:28.316 "claimed": false, 00:36:28.316 "zoned": false, 00:36:28.316 "supported_io_types": { 00:36:28.316 "read": true, 00:36:28.316 "write": true, 00:36:28.316 "unmap": true, 00:36:28.316 "flush": true, 00:36:28.316 "reset": true, 00:36:28.316 "nvme_admin": false, 00:36:28.316 "nvme_io": false, 00:36:28.316 "nvme_io_md": false, 00:36:28.316 "write_zeroes": true, 00:36:28.316 "zcopy": true, 00:36:28.316 "get_zone_info": false, 00:36:28.316 "zone_management": false, 00:36:28.316 "zone_append": false, 00:36:28.316 "compare": false, 00:36:28.316 "compare_and_write": false, 00:36:28.316 "abort": true, 00:36:28.316 "seek_hole": false, 00:36:28.316 "seek_data": false, 00:36:28.316 "copy": true, 00:36:28.316 "nvme_iov_md": false 00:36:28.316 }, 00:36:28.316 "memory_domains": [ 00:36:28.316 { 00:36:28.316 "dma_device_id": "system", 00:36:28.316 "dma_device_type": 1 00:36:28.316 }, 00:36:28.316 { 00:36:28.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:28.316 "dma_device_type": 2 00:36:28.316 } 00:36:28.316 ], 00:36:28.316 "driver_specific": {} 00:36:28.316 } 00:36:28.316 ] 00:36:28.575 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.575 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:36:28.575 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:36:28.575 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:36:28.575 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:36:28.575 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.575 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:28.575 [2024-12-09 23:18:08.955083] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:28.575 [2024-12-09 23:18:08.955139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:28.575 [2024-12-09 23:18:08.955168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:28.575 [2024-12-09 23:18:08.957509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:28.575 [2024-12-09 23:18:08.957566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:36:28.575 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.575 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:36:28.575 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:28.575 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:28.575 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:28.575 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:36:28.575 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:28.576 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:28.576 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:28.576 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:28.576 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:28.576 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:28.576 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.576 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:28.576 23:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:28.576 23:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.576 23:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:28.576 "name": "Existed_Raid", 00:36:28.576 "uuid": "c5b7480e-90b6-446d-a549-4aced6c6e125", 00:36:28.576 "strip_size_kb": 0, 00:36:28.576 "state": "configuring", 00:36:28.576 "raid_level": "raid1", 00:36:28.576 "superblock": true, 00:36:28.576 "num_base_bdevs": 4, 00:36:28.576 "num_base_bdevs_discovered": 3, 00:36:28.576 "num_base_bdevs_operational": 4, 00:36:28.576 "base_bdevs_list": [ 00:36:28.576 { 00:36:28.576 "name": "BaseBdev1", 00:36:28.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:28.576 "is_configured": false, 00:36:28.576 "data_offset": 0, 00:36:28.576 "data_size": 0 00:36:28.576 }, 00:36:28.576 { 00:36:28.576 "name": "BaseBdev2", 00:36:28.576 "uuid": "31f485b9-6ba6-4d1f-b83a-0e19b1aa6ecb", 
00:36:28.576 "is_configured": true, 00:36:28.576 "data_offset": 2048, 00:36:28.576 "data_size": 63488 00:36:28.576 }, 00:36:28.576 { 00:36:28.576 "name": "BaseBdev3", 00:36:28.576 "uuid": "afa20123-5bba-4ac2-b75c-05dbbda4c3ea", 00:36:28.576 "is_configured": true, 00:36:28.576 "data_offset": 2048, 00:36:28.576 "data_size": 63488 00:36:28.576 }, 00:36:28.576 { 00:36:28.576 "name": "BaseBdev4", 00:36:28.576 "uuid": "849ed2a1-2723-4557-bcba-be25b2d4684f", 00:36:28.576 "is_configured": true, 00:36:28.576 "data_offset": 2048, 00:36:28.576 "data_size": 63488 00:36:28.576 } 00:36:28.576 ] 00:36:28.576 }' 00:36:28.576 23:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:28.576 23:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:28.836 23:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:36:28.836 23:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.836 23:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:28.836 [2024-12-09 23:18:09.394514] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:28.836 23:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.836 23:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:36:28.836 23:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:28.836 23:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:28.836 23:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:28.836 23:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:36:28.836 23:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:28.836 23:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:28.836 23:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:28.836 23:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:28.836 23:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:28.836 23:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:28.836 23:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:28.836 23:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.836 23:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:28.836 23:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.836 23:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:28.836 "name": "Existed_Raid", 00:36:28.836 "uuid": "c5b7480e-90b6-446d-a549-4aced6c6e125", 00:36:28.836 "strip_size_kb": 0, 00:36:28.836 "state": "configuring", 00:36:28.836 "raid_level": "raid1", 00:36:28.836 "superblock": true, 00:36:28.836 "num_base_bdevs": 4, 00:36:28.836 "num_base_bdevs_discovered": 2, 00:36:28.836 "num_base_bdevs_operational": 4, 00:36:28.836 "base_bdevs_list": [ 00:36:28.836 { 00:36:28.836 "name": "BaseBdev1", 00:36:28.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:28.836 "is_configured": false, 00:36:28.836 "data_offset": 0, 00:36:28.836 "data_size": 0 00:36:28.836 }, 00:36:28.836 { 00:36:28.836 "name": null, 00:36:28.836 "uuid": "31f485b9-6ba6-4d1f-b83a-0e19b1aa6ecb", 00:36:28.836 
"is_configured": false, 00:36:28.836 "data_offset": 0, 00:36:28.836 "data_size": 63488 00:36:28.836 }, 00:36:28.836 { 00:36:28.836 "name": "BaseBdev3", 00:36:28.836 "uuid": "afa20123-5bba-4ac2-b75c-05dbbda4c3ea", 00:36:28.836 "is_configured": true, 00:36:28.836 "data_offset": 2048, 00:36:28.836 "data_size": 63488 00:36:28.836 }, 00:36:28.836 { 00:36:28.836 "name": "BaseBdev4", 00:36:28.836 "uuid": "849ed2a1-2723-4557-bcba-be25b2d4684f", 00:36:28.836 "is_configured": true, 00:36:28.836 "data_offset": 2048, 00:36:28.836 "data_size": 63488 00:36:28.836 } 00:36:28.836 ] 00:36:28.836 }' 00:36:28.836 23:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:28.836 23:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:29.405 [2024-12-09 23:18:09.875114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:29.405 BaseBdev1 
00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:29.405 [ 00:36:29.405 { 00:36:29.405 "name": "BaseBdev1", 00:36:29.405 "aliases": [ 00:36:29.405 "1e0b4ec0-7141-48b8-9874-0f5d943baaba" 00:36:29.405 ], 00:36:29.405 "product_name": "Malloc disk", 00:36:29.405 "block_size": 512, 00:36:29.405 "num_blocks": 65536, 00:36:29.405 "uuid": "1e0b4ec0-7141-48b8-9874-0f5d943baaba", 00:36:29.405 "assigned_rate_limits": { 00:36:29.405 
"rw_ios_per_sec": 0, 00:36:29.405 "rw_mbytes_per_sec": 0, 00:36:29.405 "r_mbytes_per_sec": 0, 00:36:29.405 "w_mbytes_per_sec": 0 00:36:29.405 }, 00:36:29.405 "claimed": true, 00:36:29.405 "claim_type": "exclusive_write", 00:36:29.405 "zoned": false, 00:36:29.405 "supported_io_types": { 00:36:29.405 "read": true, 00:36:29.405 "write": true, 00:36:29.405 "unmap": true, 00:36:29.405 "flush": true, 00:36:29.405 "reset": true, 00:36:29.405 "nvme_admin": false, 00:36:29.405 "nvme_io": false, 00:36:29.405 "nvme_io_md": false, 00:36:29.405 "write_zeroes": true, 00:36:29.405 "zcopy": true, 00:36:29.405 "get_zone_info": false, 00:36:29.405 "zone_management": false, 00:36:29.405 "zone_append": false, 00:36:29.405 "compare": false, 00:36:29.405 "compare_and_write": false, 00:36:29.405 "abort": true, 00:36:29.405 "seek_hole": false, 00:36:29.405 "seek_data": false, 00:36:29.405 "copy": true, 00:36:29.405 "nvme_iov_md": false 00:36:29.405 }, 00:36:29.405 "memory_domains": [ 00:36:29.405 { 00:36:29.405 "dma_device_id": "system", 00:36:29.405 "dma_device_type": 1 00:36:29.405 }, 00:36:29.405 { 00:36:29.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:29.405 "dma_device_type": 2 00:36:29.405 } 00:36:29.405 ], 00:36:29.405 "driver_specific": {} 00:36:29.405 } 00:36:29.405 ] 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.405 23:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:29.405 "name": "Existed_Raid", 00:36:29.405 "uuid": "c5b7480e-90b6-446d-a549-4aced6c6e125", 00:36:29.405 "strip_size_kb": 0, 00:36:29.405 "state": "configuring", 00:36:29.405 "raid_level": "raid1", 00:36:29.405 "superblock": true, 00:36:29.405 "num_base_bdevs": 4, 00:36:29.405 "num_base_bdevs_discovered": 3, 00:36:29.405 "num_base_bdevs_operational": 4, 00:36:29.405 "base_bdevs_list": [ 00:36:29.405 { 00:36:29.405 "name": "BaseBdev1", 00:36:29.405 "uuid": "1e0b4ec0-7141-48b8-9874-0f5d943baaba", 00:36:29.406 "is_configured": true, 00:36:29.406 "data_offset": 2048, 00:36:29.406 "data_size": 63488 
00:36:29.406 }, 00:36:29.406 { 00:36:29.406 "name": null, 00:36:29.406 "uuid": "31f485b9-6ba6-4d1f-b83a-0e19b1aa6ecb", 00:36:29.406 "is_configured": false, 00:36:29.406 "data_offset": 0, 00:36:29.406 "data_size": 63488 00:36:29.406 }, 00:36:29.406 { 00:36:29.406 "name": "BaseBdev3", 00:36:29.406 "uuid": "afa20123-5bba-4ac2-b75c-05dbbda4c3ea", 00:36:29.406 "is_configured": true, 00:36:29.406 "data_offset": 2048, 00:36:29.406 "data_size": 63488 00:36:29.406 }, 00:36:29.406 { 00:36:29.406 "name": "BaseBdev4", 00:36:29.406 "uuid": "849ed2a1-2723-4557-bcba-be25b2d4684f", 00:36:29.406 "is_configured": true, 00:36:29.406 "data_offset": 2048, 00:36:29.406 "data_size": 63488 00:36:29.406 } 00:36:29.406 ] 00:36:29.406 }' 00:36:29.406 23:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:29.406 23:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:29.665 23:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:29.665 23:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.665 23:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:36:29.665 23:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:29.924 23:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.924 23:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:36:29.924 23:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:36:29.924 23:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.924 23:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:29.924 
[2024-12-09 23:18:10.342582] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:36:29.924 23:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.924 23:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:36:29.925 23:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:29.925 23:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:29.925 23:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:29.925 23:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:29.925 23:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:29.925 23:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:29.925 23:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:29.925 23:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:29.925 23:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:29.925 23:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:29.925 23:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:29.925 23:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.925 23:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:29.925 23:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.925 23:18:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:29.925 "name": "Existed_Raid", 00:36:29.925 "uuid": "c5b7480e-90b6-446d-a549-4aced6c6e125", 00:36:29.925 "strip_size_kb": 0, 00:36:29.925 "state": "configuring", 00:36:29.925 "raid_level": "raid1", 00:36:29.925 "superblock": true, 00:36:29.925 "num_base_bdevs": 4, 00:36:29.925 "num_base_bdevs_discovered": 2, 00:36:29.925 "num_base_bdevs_operational": 4, 00:36:29.925 "base_bdevs_list": [ 00:36:29.925 { 00:36:29.925 "name": "BaseBdev1", 00:36:29.925 "uuid": "1e0b4ec0-7141-48b8-9874-0f5d943baaba", 00:36:29.925 "is_configured": true, 00:36:29.925 "data_offset": 2048, 00:36:29.925 "data_size": 63488 00:36:29.925 }, 00:36:29.925 { 00:36:29.925 "name": null, 00:36:29.925 "uuid": "31f485b9-6ba6-4d1f-b83a-0e19b1aa6ecb", 00:36:29.925 "is_configured": false, 00:36:29.925 "data_offset": 0, 00:36:29.925 "data_size": 63488 00:36:29.925 }, 00:36:29.925 { 00:36:29.925 "name": null, 00:36:29.925 "uuid": "afa20123-5bba-4ac2-b75c-05dbbda4c3ea", 00:36:29.925 "is_configured": false, 00:36:29.925 "data_offset": 0, 00:36:29.925 "data_size": 63488 00:36:29.925 }, 00:36:29.925 { 00:36:29.925 "name": "BaseBdev4", 00:36:29.925 "uuid": "849ed2a1-2723-4557-bcba-be25b2d4684f", 00:36:29.925 "is_configured": true, 00:36:29.925 "data_offset": 2048, 00:36:29.925 "data_size": 63488 00:36:29.925 } 00:36:29.925 ] 00:36:29.925 }' 00:36:29.925 23:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:29.925 23:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:30.189 23:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:30.189 23:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.189 23:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:30.189 23:18:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:36:30.189 23:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.449 23:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:36:30.449 23:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:36:30.449 23:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.449 23:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:30.449 [2024-12-09 23:18:10.834091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:30.449 23:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.449 23:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:36:30.449 23:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:30.449 23:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:30.449 23:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:30.449 23:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:30.449 23:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:30.449 23:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:30.449 23:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:30.449 23:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:36:30.450 23:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:30.450 23:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:30.450 23:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:30.450 23:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.450 23:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:30.450 23:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.450 23:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:30.450 "name": "Existed_Raid", 00:36:30.450 "uuid": "c5b7480e-90b6-446d-a549-4aced6c6e125", 00:36:30.450 "strip_size_kb": 0, 00:36:30.450 "state": "configuring", 00:36:30.450 "raid_level": "raid1", 00:36:30.450 "superblock": true, 00:36:30.450 "num_base_bdevs": 4, 00:36:30.450 "num_base_bdevs_discovered": 3, 00:36:30.450 "num_base_bdevs_operational": 4, 00:36:30.450 "base_bdevs_list": [ 00:36:30.450 { 00:36:30.450 "name": "BaseBdev1", 00:36:30.450 "uuid": "1e0b4ec0-7141-48b8-9874-0f5d943baaba", 00:36:30.450 "is_configured": true, 00:36:30.450 "data_offset": 2048, 00:36:30.450 "data_size": 63488 00:36:30.450 }, 00:36:30.450 { 00:36:30.450 "name": null, 00:36:30.450 "uuid": "31f485b9-6ba6-4d1f-b83a-0e19b1aa6ecb", 00:36:30.450 "is_configured": false, 00:36:30.450 "data_offset": 0, 00:36:30.450 "data_size": 63488 00:36:30.450 }, 00:36:30.450 { 00:36:30.450 "name": "BaseBdev3", 00:36:30.450 "uuid": "afa20123-5bba-4ac2-b75c-05dbbda4c3ea", 00:36:30.450 "is_configured": true, 00:36:30.450 "data_offset": 2048, 00:36:30.450 "data_size": 63488 00:36:30.450 }, 00:36:30.450 { 00:36:30.450 "name": "BaseBdev4", 00:36:30.450 "uuid": 
"849ed2a1-2723-4557-bcba-be25b2d4684f", 00:36:30.450 "is_configured": true, 00:36:30.450 "data_offset": 2048, 00:36:30.450 "data_size": 63488 00:36:30.450 } 00:36:30.450 ] 00:36:30.450 }' 00:36:30.450 23:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:30.450 23:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:30.709 23:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:36:30.709 23:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:30.709 23:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.709 23:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:30.709 23:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.709 23:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:36:30.709 23:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:36:30.709 23:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.709 23:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:30.709 [2024-12-09 23:18:11.309608] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:30.967 23:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.967 23:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:36:30.967 23:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:30.967 23:18:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:30.967 23:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:30.967 23:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:30.967 23:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:30.967 23:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:30.967 23:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:30.967 23:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:30.967 23:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:30.967 23:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:30.967 23:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:30.967 23:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.967 23:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:30.967 23:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.967 23:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:30.967 "name": "Existed_Raid", 00:36:30.967 "uuid": "c5b7480e-90b6-446d-a549-4aced6c6e125", 00:36:30.967 "strip_size_kb": 0, 00:36:30.967 "state": "configuring", 00:36:30.967 "raid_level": "raid1", 00:36:30.967 "superblock": true, 00:36:30.967 "num_base_bdevs": 4, 00:36:30.967 "num_base_bdevs_discovered": 2, 00:36:30.967 "num_base_bdevs_operational": 4, 00:36:30.967 "base_bdevs_list": [ 00:36:30.967 { 00:36:30.967 "name": null, 00:36:30.967 
"uuid": "1e0b4ec0-7141-48b8-9874-0f5d943baaba", 00:36:30.967 "is_configured": false, 00:36:30.967 "data_offset": 0, 00:36:30.967 "data_size": 63488 00:36:30.967 }, 00:36:30.967 { 00:36:30.967 "name": null, 00:36:30.967 "uuid": "31f485b9-6ba6-4d1f-b83a-0e19b1aa6ecb", 00:36:30.967 "is_configured": false, 00:36:30.967 "data_offset": 0, 00:36:30.967 "data_size": 63488 00:36:30.967 }, 00:36:30.967 { 00:36:30.967 "name": "BaseBdev3", 00:36:30.967 "uuid": "afa20123-5bba-4ac2-b75c-05dbbda4c3ea", 00:36:30.967 "is_configured": true, 00:36:30.967 "data_offset": 2048, 00:36:30.967 "data_size": 63488 00:36:30.967 }, 00:36:30.967 { 00:36:30.967 "name": "BaseBdev4", 00:36:30.967 "uuid": "849ed2a1-2723-4557-bcba-be25b2d4684f", 00:36:30.967 "is_configured": true, 00:36:30.967 "data_offset": 2048, 00:36:30.967 "data_size": 63488 00:36:30.967 } 00:36:30.967 ] 00:36:30.967 }' 00:36:30.967 23:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:30.967 23:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:31.535 23:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:31.535 23:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:36:31.535 23:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.535 23:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:31.535 23:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.535 23:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:36:31.535 23:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:36:31.535 23:18:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.535 23:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:31.535 [2024-12-09 23:18:11.922488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:31.535 23:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.535 23:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:36:31.535 23:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:31.535 23:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:31.535 23:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:31.535 23:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:31.535 23:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:31.535 23:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:31.535 23:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:31.535 23:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:31.535 23:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:31.535 23:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:31.535 23:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.535 23:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:31.535 23:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:31.535 23:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.535 23:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:31.535 "name": "Existed_Raid", 00:36:31.535 "uuid": "c5b7480e-90b6-446d-a549-4aced6c6e125", 00:36:31.535 "strip_size_kb": 0, 00:36:31.535 "state": "configuring", 00:36:31.535 "raid_level": "raid1", 00:36:31.536 "superblock": true, 00:36:31.536 "num_base_bdevs": 4, 00:36:31.536 "num_base_bdevs_discovered": 3, 00:36:31.536 "num_base_bdevs_operational": 4, 00:36:31.536 "base_bdevs_list": [ 00:36:31.536 { 00:36:31.536 "name": null, 00:36:31.536 "uuid": "1e0b4ec0-7141-48b8-9874-0f5d943baaba", 00:36:31.536 "is_configured": false, 00:36:31.536 "data_offset": 0, 00:36:31.536 "data_size": 63488 00:36:31.536 }, 00:36:31.536 { 00:36:31.536 "name": "BaseBdev2", 00:36:31.536 "uuid": "31f485b9-6ba6-4d1f-b83a-0e19b1aa6ecb", 00:36:31.536 "is_configured": true, 00:36:31.536 "data_offset": 2048, 00:36:31.536 "data_size": 63488 00:36:31.536 }, 00:36:31.536 { 00:36:31.536 "name": "BaseBdev3", 00:36:31.536 "uuid": "afa20123-5bba-4ac2-b75c-05dbbda4c3ea", 00:36:31.536 "is_configured": true, 00:36:31.536 "data_offset": 2048, 00:36:31.536 "data_size": 63488 00:36:31.536 }, 00:36:31.536 { 00:36:31.536 "name": "BaseBdev4", 00:36:31.536 "uuid": "849ed2a1-2723-4557-bcba-be25b2d4684f", 00:36:31.536 "is_configured": true, 00:36:31.536 "data_offset": 2048, 00:36:31.536 "data_size": 63488 00:36:31.536 } 00:36:31.536 ] 00:36:31.536 }' 00:36:31.536 23:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:31.536 23:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:31.794 23:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:31.794 23:18:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:36:31.795 23:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.795 23:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:31.795 23:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.795 23:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:36:31.795 23:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:31.795 23:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:36:31.795 23:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.795 23:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:32.053 23:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.053 23:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1e0b4ec0-7141-48b8-9874-0f5d943baaba 00:36:32.053 23:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.053 23:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:32.053 [2024-12-09 23:18:12.492644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:36:32.053 [2024-12-09 23:18:12.492906] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:36:32.053 [2024-12-09 23:18:12.492929] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:36:32.053 [2024-12-09 23:18:12.493212] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:36:32.053 
NewBaseBdev 00:36:32.053 [2024-12-09 23:18:12.493371] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:36:32.053 [2024-12-09 23:18:12.493382] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:36:32.053 [2024-12-09 23:18:12.493536] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:32.053 23:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.053 23:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:36:32.053 23:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:36:32.053 23:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:32.053 23:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:36:32.053 23:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:32.053 23:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:32.053 23:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:32.053 23:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.053 23:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:32.053 23:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.053 23:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:36:32.053 23:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.053 23:18:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:36:32.053 [ 00:36:32.053 { 00:36:32.053 "name": "NewBaseBdev", 00:36:32.053 "aliases": [ 00:36:32.053 "1e0b4ec0-7141-48b8-9874-0f5d943baaba" 00:36:32.053 ], 00:36:32.053 "product_name": "Malloc disk", 00:36:32.053 "block_size": 512, 00:36:32.053 "num_blocks": 65536, 00:36:32.053 "uuid": "1e0b4ec0-7141-48b8-9874-0f5d943baaba", 00:36:32.053 "assigned_rate_limits": { 00:36:32.053 "rw_ios_per_sec": 0, 00:36:32.053 "rw_mbytes_per_sec": 0, 00:36:32.053 "r_mbytes_per_sec": 0, 00:36:32.053 "w_mbytes_per_sec": 0 00:36:32.053 }, 00:36:32.053 "claimed": true, 00:36:32.053 "claim_type": "exclusive_write", 00:36:32.053 "zoned": false, 00:36:32.053 "supported_io_types": { 00:36:32.053 "read": true, 00:36:32.053 "write": true, 00:36:32.053 "unmap": true, 00:36:32.053 "flush": true, 00:36:32.053 "reset": true, 00:36:32.053 "nvme_admin": false, 00:36:32.053 "nvme_io": false, 00:36:32.053 "nvme_io_md": false, 00:36:32.053 "write_zeroes": true, 00:36:32.053 "zcopy": true, 00:36:32.053 "get_zone_info": false, 00:36:32.053 "zone_management": false, 00:36:32.053 "zone_append": false, 00:36:32.053 "compare": false, 00:36:32.053 "compare_and_write": false, 00:36:32.053 "abort": true, 00:36:32.053 "seek_hole": false, 00:36:32.053 "seek_data": false, 00:36:32.053 "copy": true, 00:36:32.053 "nvme_iov_md": false 00:36:32.053 }, 00:36:32.053 "memory_domains": [ 00:36:32.053 { 00:36:32.053 "dma_device_id": "system", 00:36:32.053 "dma_device_type": 1 00:36:32.053 }, 00:36:32.053 { 00:36:32.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:32.053 "dma_device_type": 2 00:36:32.053 } 00:36:32.053 ], 00:36:32.053 "driver_specific": {} 00:36:32.053 } 00:36:32.053 ] 00:36:32.054 23:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.054 23:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:36:32.054 23:18:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:36:32.054 23:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:32.054 23:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:32.054 23:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:32.054 23:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:32.054 23:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:32.054 23:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:32.054 23:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:32.054 23:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:32.054 23:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:32.054 23:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:32.054 23:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:32.054 23:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.054 23:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:32.054 23:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.054 23:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:32.054 "name": "Existed_Raid", 00:36:32.054 "uuid": "c5b7480e-90b6-446d-a549-4aced6c6e125", 00:36:32.054 "strip_size_kb": 0, 00:36:32.054 "state": "online", 00:36:32.054 "raid_level": 
"raid1", 00:36:32.054 "superblock": true, 00:36:32.054 "num_base_bdevs": 4, 00:36:32.054 "num_base_bdevs_discovered": 4, 00:36:32.054 "num_base_bdevs_operational": 4, 00:36:32.054 "base_bdevs_list": [ 00:36:32.054 { 00:36:32.054 "name": "NewBaseBdev", 00:36:32.054 "uuid": "1e0b4ec0-7141-48b8-9874-0f5d943baaba", 00:36:32.054 "is_configured": true, 00:36:32.054 "data_offset": 2048, 00:36:32.054 "data_size": 63488 00:36:32.054 }, 00:36:32.054 { 00:36:32.054 "name": "BaseBdev2", 00:36:32.054 "uuid": "31f485b9-6ba6-4d1f-b83a-0e19b1aa6ecb", 00:36:32.054 "is_configured": true, 00:36:32.054 "data_offset": 2048, 00:36:32.054 "data_size": 63488 00:36:32.054 }, 00:36:32.054 { 00:36:32.054 "name": "BaseBdev3", 00:36:32.054 "uuid": "afa20123-5bba-4ac2-b75c-05dbbda4c3ea", 00:36:32.054 "is_configured": true, 00:36:32.054 "data_offset": 2048, 00:36:32.054 "data_size": 63488 00:36:32.054 }, 00:36:32.054 { 00:36:32.054 "name": "BaseBdev4", 00:36:32.054 "uuid": "849ed2a1-2723-4557-bcba-be25b2d4684f", 00:36:32.054 "is_configured": true, 00:36:32.054 "data_offset": 2048, 00:36:32.054 "data_size": 63488 00:36:32.054 } 00:36:32.054 ] 00:36:32.054 }' 00:36:32.054 23:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:32.054 23:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:32.622 23:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:36:32.622 23:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:36:32.622 23:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:36:32.622 23:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:36:32.622 23:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:36:32.622 23:18:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:36:32.622 23:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:32.622 23:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:36:32.622 23:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.622 23:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:32.622 [2024-12-09 23:18:13.016354] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:32.622 23:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.622 23:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:32.622 "name": "Existed_Raid", 00:36:32.622 "aliases": [ 00:36:32.622 "c5b7480e-90b6-446d-a549-4aced6c6e125" 00:36:32.622 ], 00:36:32.622 "product_name": "Raid Volume", 00:36:32.622 "block_size": 512, 00:36:32.622 "num_blocks": 63488, 00:36:32.622 "uuid": "c5b7480e-90b6-446d-a549-4aced6c6e125", 00:36:32.622 "assigned_rate_limits": { 00:36:32.622 "rw_ios_per_sec": 0, 00:36:32.622 "rw_mbytes_per_sec": 0, 00:36:32.622 "r_mbytes_per_sec": 0, 00:36:32.622 "w_mbytes_per_sec": 0 00:36:32.622 }, 00:36:32.622 "claimed": false, 00:36:32.622 "zoned": false, 00:36:32.622 "supported_io_types": { 00:36:32.622 "read": true, 00:36:32.622 "write": true, 00:36:32.622 "unmap": false, 00:36:32.622 "flush": false, 00:36:32.622 "reset": true, 00:36:32.622 "nvme_admin": false, 00:36:32.623 "nvme_io": false, 00:36:32.623 "nvme_io_md": false, 00:36:32.623 "write_zeroes": true, 00:36:32.623 "zcopy": false, 00:36:32.623 "get_zone_info": false, 00:36:32.623 "zone_management": false, 00:36:32.623 "zone_append": false, 00:36:32.623 "compare": false, 00:36:32.623 "compare_and_write": false, 00:36:32.623 "abort": false, 00:36:32.623 "seek_hole": false, 
00:36:32.623 "seek_data": false, 00:36:32.623 "copy": false, 00:36:32.623 "nvme_iov_md": false 00:36:32.623 }, 00:36:32.623 "memory_domains": [ 00:36:32.623 { 00:36:32.623 "dma_device_id": "system", 00:36:32.623 "dma_device_type": 1 00:36:32.623 }, 00:36:32.623 { 00:36:32.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:32.623 "dma_device_type": 2 00:36:32.623 }, 00:36:32.623 { 00:36:32.623 "dma_device_id": "system", 00:36:32.623 "dma_device_type": 1 00:36:32.623 }, 00:36:32.623 { 00:36:32.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:32.623 "dma_device_type": 2 00:36:32.623 }, 00:36:32.623 { 00:36:32.623 "dma_device_id": "system", 00:36:32.623 "dma_device_type": 1 00:36:32.623 }, 00:36:32.623 { 00:36:32.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:32.623 "dma_device_type": 2 00:36:32.623 }, 00:36:32.623 { 00:36:32.623 "dma_device_id": "system", 00:36:32.623 "dma_device_type": 1 00:36:32.623 }, 00:36:32.623 { 00:36:32.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:32.623 "dma_device_type": 2 00:36:32.623 } 00:36:32.623 ], 00:36:32.623 "driver_specific": { 00:36:32.623 "raid": { 00:36:32.623 "uuid": "c5b7480e-90b6-446d-a549-4aced6c6e125", 00:36:32.623 "strip_size_kb": 0, 00:36:32.623 "state": "online", 00:36:32.623 "raid_level": "raid1", 00:36:32.623 "superblock": true, 00:36:32.623 "num_base_bdevs": 4, 00:36:32.623 "num_base_bdevs_discovered": 4, 00:36:32.623 "num_base_bdevs_operational": 4, 00:36:32.623 "base_bdevs_list": [ 00:36:32.623 { 00:36:32.623 "name": "NewBaseBdev", 00:36:32.623 "uuid": "1e0b4ec0-7141-48b8-9874-0f5d943baaba", 00:36:32.623 "is_configured": true, 00:36:32.623 "data_offset": 2048, 00:36:32.623 "data_size": 63488 00:36:32.623 }, 00:36:32.623 { 00:36:32.623 "name": "BaseBdev2", 00:36:32.623 "uuid": "31f485b9-6ba6-4d1f-b83a-0e19b1aa6ecb", 00:36:32.623 "is_configured": true, 00:36:32.623 "data_offset": 2048, 00:36:32.623 "data_size": 63488 00:36:32.623 }, 00:36:32.623 { 00:36:32.623 "name": "BaseBdev3", 00:36:32.623 "uuid": 
"afa20123-5bba-4ac2-b75c-05dbbda4c3ea", 00:36:32.623 "is_configured": true, 00:36:32.623 "data_offset": 2048, 00:36:32.623 "data_size": 63488 00:36:32.623 }, 00:36:32.623 { 00:36:32.623 "name": "BaseBdev4", 00:36:32.623 "uuid": "849ed2a1-2723-4557-bcba-be25b2d4684f", 00:36:32.623 "is_configured": true, 00:36:32.623 "data_offset": 2048, 00:36:32.623 "data_size": 63488 00:36:32.623 } 00:36:32.623 ] 00:36:32.623 } 00:36:32.623 } 00:36:32.623 }' 00:36:32.623 23:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:32.623 23:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:36:32.623 BaseBdev2 00:36:32.623 BaseBdev3 00:36:32.623 BaseBdev4' 00:36:32.623 23:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:32.623 23:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:36:32.623 23:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:32.623 23:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:36:32.623 23:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.623 23:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:32.623 23:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:32.623 23:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.623 23:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:32.623 23:18:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:32.623 23:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:32.623 23:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:36:32.623 23:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.623 23:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:32.623 23:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:32.623 23:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.881 23:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:32.881 23:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:32.881 23:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:32.881 23:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:36:32.881 23:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.881 23:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:32.881 23:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:32.881 23:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.881 23:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:32.881 23:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:32.881 
23:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:32.881 23:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:32.881 23:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:36:32.881 23:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.881 23:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:32.881 23:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.881 23:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:32.881 23:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:32.881 23:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:36:32.881 23:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.881 23:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:32.881 [2024-12-09 23:18:13.363556] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:32.881 [2024-12-09 23:18:13.363702] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:32.881 [2024-12-09 23:18:13.363816] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:32.881 [2024-12-09 23:18:13.364108] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:32.881 [2024-12-09 23:18:13.364124] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:36:32.881 23:18:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.881 23:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73714 00:36:32.881 23:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73714 ']' 00:36:32.881 23:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73714 00:36:32.881 23:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:36:32.881 23:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:32.881 23:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73714 00:36:32.881 23:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:32.881 killing process with pid 73714 00:36:32.881 23:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:32.881 23:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73714' 00:36:32.881 23:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73714 00:36:32.881 [2024-12-09 23:18:13.413886] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:32.882 23:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73714 00:36:33.455 [2024-12-09 23:18:13.815266] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:34.830 23:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:36:34.830 00:36:34.830 real 0m11.717s 00:36:34.830 user 0m18.567s 00:36:34.830 sys 0m2.305s 00:36:34.830 23:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:34.830 23:18:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:36:34.830 ************************************ 00:36:34.830 END TEST raid_state_function_test_sb 00:36:34.830 ************************************ 00:36:34.830 23:18:15 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:36:34.830 23:18:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:34.830 23:18:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:34.830 23:18:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:34.830 ************************************ 00:36:34.830 START TEST raid_superblock_test 00:36:34.830 ************************************ 00:36:34.830 23:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:36:34.830 23:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:36:34.830 23:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:36:34.830 23:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:36:34.830 23:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:36:34.830 23:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:36:34.830 23:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:36:34.830 23:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:36:34.830 23:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:36:34.830 23:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:36:34.830 23:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:36:34.830 23:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:36:34.830 23:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:36:34.830 23:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:36:34.830 23:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:36:34.830 23:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:36:34.830 23:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:36:34.830 23:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74386 00:36:34.830 23:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74386 00:36:34.830 23:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74386 ']' 00:36:34.830 23:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:34.830 23:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:34.830 23:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:34.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:34.830 23:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:34.830 23:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:34.830 [2024-12-09 23:18:15.246694] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:36:34.830 [2024-12-09 23:18:15.247117] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74386 ] 00:36:34.830 [2024-12-09 23:18:15.441602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:35.089 [2024-12-09 23:18:15.568659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:35.347 [2024-12-09 23:18:15.784227] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:35.347 [2024-12-09 23:18:15.784520] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:35.606 23:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:35.606 23:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:36:35.606 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:36:35.606 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:36:35.606 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:36:35.606 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:36:35.606 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:36:35.606 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:35.606 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:36:35.606 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:35.606 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:36:35.606 
23:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.606 23:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:35.930 malloc1 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:35.930 [2024-12-09 23:18:16.271966] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:35.930 [2024-12-09 23:18:16.272165] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:35.930 [2024-12-09 23:18:16.272229] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:36:35.930 [2024-12-09 23:18:16.272245] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:35.930 [2024-12-09 23:18:16.274815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:35.930 [2024-12-09 23:18:16.274858] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:35.930 pt1 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:35.930 malloc2 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:35.930 [2024-12-09 23:18:16.330204] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:35.930 [2024-12-09 23:18:16.330428] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:35.930 [2024-12-09 23:18:16.330497] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:36:35.930 [2024-12-09 23:18:16.330577] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:35.930 [2024-12-09 23:18:16.333143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:35.930 [2024-12-09 23:18:16.333287] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:35.930 
pt2 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:35.930 malloc3 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:35.930 [2024-12-09 23:18:16.402440] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:36:35.930 [2024-12-09 23:18:16.402617] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:35.930 [2024-12-09 23:18:16.402681] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:36:35.930 [2024-12-09 23:18:16.402761] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:35.930 [2024-12-09 23:18:16.405361] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:35.930 [2024-12-09 23:18:16.405526] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:36:35.930 pt3 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:36:35.930 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:36:35.931 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:36:35.931 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:36:35.931 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:35.931 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:36:35.931 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:35.931 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:36:35.931 23:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.931 23:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:35.931 malloc4 00:36:35.931 23:18:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.931 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:36:35.931 23:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.931 23:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:35.931 [2024-12-09 23:18:16.459210] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:36:35.931 [2024-12-09 23:18:16.459275] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:35.931 [2024-12-09 23:18:16.459300] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:36:35.931 [2024-12-09 23:18:16.459313] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:35.931 [2024-12-09 23:18:16.461860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:35.931 [2024-12-09 23:18:16.461899] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:36:35.931 pt4 00:36:35.931 23:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.931 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:36:35.931 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:36:35.931 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:36:35.931 23:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.931 23:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:35.931 [2024-12-09 23:18:16.471227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:35.931 [2024-12-09 23:18:16.474004] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:35.931 [2024-12-09 23:18:16.474088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:36:35.931 [2024-12-09 23:18:16.474170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:36:35.931 [2024-12-09 23:18:16.474646] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:36:35.931 [2024-12-09 23:18:16.474794] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:36:35.931 [2024-12-09 23:18:16.475178] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:36:35.931 [2024-12-09 23:18:16.475539] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:36:35.931 [2024-12-09 23:18:16.475570] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:36:35.931 [2024-12-09 23:18:16.475797] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:35.931 23:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.931 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:36:35.931 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:35.931 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:35.931 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:35.931 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:35.931 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:35.931 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:35.931 
23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:35.931 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:35.931 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:35.931 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:35.931 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:35.931 23:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.931 23:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:35.931 23:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.931 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:35.931 "name": "raid_bdev1", 00:36:35.931 "uuid": "800b4420-e96a-46d3-bec8-69806e09d269", 00:36:35.931 "strip_size_kb": 0, 00:36:35.931 "state": "online", 00:36:35.931 "raid_level": "raid1", 00:36:35.931 "superblock": true, 00:36:35.931 "num_base_bdevs": 4, 00:36:35.931 "num_base_bdevs_discovered": 4, 00:36:35.931 "num_base_bdevs_operational": 4, 00:36:35.931 "base_bdevs_list": [ 00:36:35.931 { 00:36:35.931 "name": "pt1", 00:36:35.931 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:35.931 "is_configured": true, 00:36:35.931 "data_offset": 2048, 00:36:35.931 "data_size": 63488 00:36:35.931 }, 00:36:35.931 { 00:36:35.931 "name": "pt2", 00:36:35.931 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:35.931 "is_configured": true, 00:36:35.931 "data_offset": 2048, 00:36:35.931 "data_size": 63488 00:36:35.931 }, 00:36:35.931 { 00:36:35.931 "name": "pt3", 00:36:35.931 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:35.931 "is_configured": true, 00:36:35.931 "data_offset": 2048, 00:36:35.931 "data_size": 63488 
00:36:35.931 }, 00:36:35.931 { 00:36:35.931 "name": "pt4", 00:36:35.931 "uuid": "00000000-0000-0000-0000-000000000004", 00:36:35.931 "is_configured": true, 00:36:35.931 "data_offset": 2048, 00:36:35.931 "data_size": 63488 00:36:35.931 } 00:36:35.931 ] 00:36:35.931 }' 00:36:35.931 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:35.931 23:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:36.498 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:36:36.498 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:36:36.498 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:36:36.498 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:36:36.498 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:36:36.498 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:36:36.498 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:36:36.498 23:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.498 23:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:36.498 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:36.498 [2024-12-09 23:18:16.931482] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:36.498 23:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.498 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:36.498 "name": "raid_bdev1", 00:36:36.498 "aliases": [ 00:36:36.498 "800b4420-e96a-46d3-bec8-69806e09d269" 00:36:36.498 ], 
00:36:36.498 "product_name": "Raid Volume", 00:36:36.498 "block_size": 512, 00:36:36.498 "num_blocks": 63488, 00:36:36.498 "uuid": "800b4420-e96a-46d3-bec8-69806e09d269", 00:36:36.498 "assigned_rate_limits": { 00:36:36.498 "rw_ios_per_sec": 0, 00:36:36.498 "rw_mbytes_per_sec": 0, 00:36:36.498 "r_mbytes_per_sec": 0, 00:36:36.498 "w_mbytes_per_sec": 0 00:36:36.498 }, 00:36:36.498 "claimed": false, 00:36:36.498 "zoned": false, 00:36:36.498 "supported_io_types": { 00:36:36.498 "read": true, 00:36:36.498 "write": true, 00:36:36.498 "unmap": false, 00:36:36.498 "flush": false, 00:36:36.498 "reset": true, 00:36:36.498 "nvme_admin": false, 00:36:36.498 "nvme_io": false, 00:36:36.498 "nvme_io_md": false, 00:36:36.498 "write_zeroes": true, 00:36:36.498 "zcopy": false, 00:36:36.498 "get_zone_info": false, 00:36:36.498 "zone_management": false, 00:36:36.498 "zone_append": false, 00:36:36.498 "compare": false, 00:36:36.498 "compare_and_write": false, 00:36:36.498 "abort": false, 00:36:36.498 "seek_hole": false, 00:36:36.498 "seek_data": false, 00:36:36.498 "copy": false, 00:36:36.498 "nvme_iov_md": false 00:36:36.498 }, 00:36:36.498 "memory_domains": [ 00:36:36.498 { 00:36:36.498 "dma_device_id": "system", 00:36:36.498 "dma_device_type": 1 00:36:36.498 }, 00:36:36.498 { 00:36:36.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:36.498 "dma_device_type": 2 00:36:36.498 }, 00:36:36.498 { 00:36:36.498 "dma_device_id": "system", 00:36:36.498 "dma_device_type": 1 00:36:36.498 }, 00:36:36.498 { 00:36:36.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:36.498 "dma_device_type": 2 00:36:36.498 }, 00:36:36.498 { 00:36:36.498 "dma_device_id": "system", 00:36:36.498 "dma_device_type": 1 00:36:36.498 }, 00:36:36.498 { 00:36:36.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:36.498 "dma_device_type": 2 00:36:36.498 }, 00:36:36.498 { 00:36:36.498 "dma_device_id": "system", 00:36:36.498 "dma_device_type": 1 00:36:36.498 }, 00:36:36.498 { 00:36:36.498 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:36:36.498 "dma_device_type": 2 00:36:36.498 } 00:36:36.498 ], 00:36:36.498 "driver_specific": { 00:36:36.498 "raid": { 00:36:36.498 "uuid": "800b4420-e96a-46d3-bec8-69806e09d269", 00:36:36.498 "strip_size_kb": 0, 00:36:36.499 "state": "online", 00:36:36.499 "raid_level": "raid1", 00:36:36.499 "superblock": true, 00:36:36.499 "num_base_bdevs": 4, 00:36:36.499 "num_base_bdevs_discovered": 4, 00:36:36.499 "num_base_bdevs_operational": 4, 00:36:36.499 "base_bdevs_list": [ 00:36:36.499 { 00:36:36.499 "name": "pt1", 00:36:36.499 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:36.499 "is_configured": true, 00:36:36.499 "data_offset": 2048, 00:36:36.499 "data_size": 63488 00:36:36.499 }, 00:36:36.499 { 00:36:36.499 "name": "pt2", 00:36:36.499 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:36.499 "is_configured": true, 00:36:36.499 "data_offset": 2048, 00:36:36.499 "data_size": 63488 00:36:36.499 }, 00:36:36.499 { 00:36:36.499 "name": "pt3", 00:36:36.499 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:36.499 "is_configured": true, 00:36:36.499 "data_offset": 2048, 00:36:36.499 "data_size": 63488 00:36:36.499 }, 00:36:36.499 { 00:36:36.499 "name": "pt4", 00:36:36.499 "uuid": "00000000-0000-0000-0000-000000000004", 00:36:36.499 "is_configured": true, 00:36:36.499 "data_offset": 2048, 00:36:36.499 "data_size": 63488 00:36:36.499 } 00:36:36.499 ] 00:36:36.499 } 00:36:36.499 } 00:36:36.499 }' 00:36:36.499 23:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:36.499 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:36:36.499 pt2 00:36:36.499 pt3 00:36:36.499 pt4' 00:36:36.499 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:36.499 23:18:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:36:36.499 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:36.499 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:36:36.499 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:36.499 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.499 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:36.499 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.499 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:36.499 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:36.499 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:36.499 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:36:36.499 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.499 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:36.499 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:36.758 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.758 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:36.758 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:36.758 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:36.758 23:18:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:36.758 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:36:36.758 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.758 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:36.758 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.758 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:36.758 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:36.758 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:36.758 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:36:36.758 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.758 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:36.758 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:36.758 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.758 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:36.758 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:36.758 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:36:36.758 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.758 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | 
.uuid' 00:36:36.758 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:36.758 [2024-12-09 23:18:17.258967] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:36.758 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.758 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=800b4420-e96a-46d3-bec8-69806e09d269 00:36:36.758 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 800b4420-e96a-46d3-bec8-69806e09d269 ']' 00:36:36.758 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:36:36.758 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.758 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:36.758 [2024-12-09 23:18:17.302594] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:36.758 [2024-12-09 23:18:17.302626] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:36.758 [2024-12-09 23:18:17.302718] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:36.758 [2024-12-09 23:18:17.302806] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:36.758 [2024-12-09 23:18:17.302824] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:36:36.758 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.758 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:36.758 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:36:36.758 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:36:36.758 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:36.759 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.759 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:36:36.759 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:36:36.759 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:36:36.759 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:36:36.759 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.759 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:36.759 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.759 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:36:36.759 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:36:36.759 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.759 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:36.759 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.759 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:36:36.759 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:36:36.759 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.759 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:36.759 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:36:36.759 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:36:36.759 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:36:36.759 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.759 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:37.018 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.018 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:36:37.018 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.018 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:36:37.018 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:37.018 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.018 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:36:37.018 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:36:37.018 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:36:37.018 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:36:37.018 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:37.018 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:37.018 23:18:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:37.018 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:37.018 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:36:37.018 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.018 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:37.018 [2024-12-09 23:18:17.474496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:36:37.018 [2024-12-09 23:18:17.476727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:36:37.018 [2024-12-09 23:18:17.476782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:36:37.018 [2024-12-09 23:18:17.476821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:36:37.018 [2024-12-09 23:18:17.476875] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:36:37.018 [2024-12-09 23:18:17.476940] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:36:37.018 [2024-12-09 23:18:17.476963] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:36:37.018 [2024-12-09 23:18:17.476986] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:36:37.018 [2024-12-09 23:18:17.477004] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:37.018 [2024-12-09 23:18:17.477018] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 
00:36:37.018 request: 00:36:37.018 { 00:36:37.018 "name": "raid_bdev1", 00:36:37.018 "raid_level": "raid1", 00:36:37.018 "base_bdevs": [ 00:36:37.018 "malloc1", 00:36:37.018 "malloc2", 00:36:37.018 "malloc3", 00:36:37.018 "malloc4" 00:36:37.018 ], 00:36:37.018 "superblock": false, 00:36:37.018 "method": "bdev_raid_create", 00:36:37.018 "req_id": 1 00:36:37.018 } 00:36:37.018 Got JSON-RPC error response 00:36:37.018 response: 00:36:37.018 { 00:36:37.018 "code": -17, 00:36:37.018 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:36:37.018 } 00:36:37.018 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:37.018 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:36:37.018 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:37.018 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:37.018 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:37.018 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:37.018 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.018 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:37.018 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:36:37.018 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.018 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:36:37.018 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:36:37.018 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:37.018 23:18:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.018 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:37.018 [2024-12-09 23:18:17.542442] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:37.018 [2024-12-09 23:18:17.542513] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:37.018 [2024-12-09 23:18:17.542552] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:36:37.018 [2024-12-09 23:18:17.542567] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:37.018 [2024-12-09 23:18:17.545175] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:37.018 [2024-12-09 23:18:17.545227] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:37.018 [2024-12-09 23:18:17.545323] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:36:37.018 [2024-12-09 23:18:17.545435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:37.018 pt1 00:36:37.018 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.019 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:36:37.019 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:37.019 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:37.019 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:37.019 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:37.019 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:37.019 23:18:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:37.019 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:37.019 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:37.019 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:37.019 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:37.019 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:37.019 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.019 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:37.019 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.019 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:37.019 "name": "raid_bdev1", 00:36:37.019 "uuid": "800b4420-e96a-46d3-bec8-69806e09d269", 00:36:37.019 "strip_size_kb": 0, 00:36:37.019 "state": "configuring", 00:36:37.019 "raid_level": "raid1", 00:36:37.019 "superblock": true, 00:36:37.019 "num_base_bdevs": 4, 00:36:37.019 "num_base_bdevs_discovered": 1, 00:36:37.019 "num_base_bdevs_operational": 4, 00:36:37.019 "base_bdevs_list": [ 00:36:37.019 { 00:36:37.019 "name": "pt1", 00:36:37.019 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:37.019 "is_configured": true, 00:36:37.019 "data_offset": 2048, 00:36:37.019 "data_size": 63488 00:36:37.019 }, 00:36:37.019 { 00:36:37.019 "name": null, 00:36:37.019 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:37.019 "is_configured": false, 00:36:37.019 "data_offset": 2048, 00:36:37.019 "data_size": 63488 00:36:37.019 }, 00:36:37.019 { 00:36:37.019 "name": null, 00:36:37.019 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:37.019 
"is_configured": false, 00:36:37.019 "data_offset": 2048, 00:36:37.019 "data_size": 63488 00:36:37.019 }, 00:36:37.019 { 00:36:37.019 "name": null, 00:36:37.019 "uuid": "00000000-0000-0000-0000-000000000004", 00:36:37.019 "is_configured": false, 00:36:37.019 "data_offset": 2048, 00:36:37.019 "data_size": 63488 00:36:37.019 } 00:36:37.019 ] 00:36:37.019 }' 00:36:37.019 23:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:37.019 23:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:37.586 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:36:37.586 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:37.586 23:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.586 23:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:37.587 [2024-12-09 23:18:18.014471] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:37.587 [2024-12-09 23:18:18.014699] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:37.587 [2024-12-09 23:18:18.014735] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:36:37.587 [2024-12-09 23:18:18.014751] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:37.587 [2024-12-09 23:18:18.015222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:37.587 [2024-12-09 23:18:18.015252] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:37.587 [2024-12-09 23:18:18.015344] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:36:37.587 [2024-12-09 23:18:18.015373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:36:37.587 pt2 00:36:37.587 23:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.587 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:36:37.587 23:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.587 23:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:37.587 [2024-12-09 23:18:18.026471] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:36:37.587 23:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.587 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:36:37.587 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:37.587 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:37.587 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:37.587 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:37.587 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:37.587 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:37.587 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:37.587 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:37.587 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:37.587 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:37.587 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:36:37.587 23:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.587 23:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:37.587 23:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.587 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:37.587 "name": "raid_bdev1", 00:36:37.587 "uuid": "800b4420-e96a-46d3-bec8-69806e09d269", 00:36:37.587 "strip_size_kb": 0, 00:36:37.587 "state": "configuring", 00:36:37.587 "raid_level": "raid1", 00:36:37.587 "superblock": true, 00:36:37.587 "num_base_bdevs": 4, 00:36:37.587 "num_base_bdevs_discovered": 1, 00:36:37.587 "num_base_bdevs_operational": 4, 00:36:37.587 "base_bdevs_list": [ 00:36:37.587 { 00:36:37.587 "name": "pt1", 00:36:37.587 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:37.587 "is_configured": true, 00:36:37.587 "data_offset": 2048, 00:36:37.587 "data_size": 63488 00:36:37.587 }, 00:36:37.587 { 00:36:37.587 "name": null, 00:36:37.587 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:37.587 "is_configured": false, 00:36:37.587 "data_offset": 0, 00:36:37.587 "data_size": 63488 00:36:37.587 }, 00:36:37.587 { 00:36:37.587 "name": null, 00:36:37.587 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:37.587 "is_configured": false, 00:36:37.587 "data_offset": 2048, 00:36:37.587 "data_size": 63488 00:36:37.587 }, 00:36:37.587 { 00:36:37.587 "name": null, 00:36:37.587 "uuid": "00000000-0000-0000-0000-000000000004", 00:36:37.587 "is_configured": false, 00:36:37.587 "data_offset": 2048, 00:36:37.587 "data_size": 63488 00:36:37.587 } 00:36:37.587 ] 00:36:37.587 }' 00:36:37.587 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:37.587 23:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:38.154 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:36:38.154 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:36:38.154 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:38.154 23:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.154 23:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:38.154 [2024-12-09 23:18:18.498473] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:38.154 [2024-12-09 23:18:18.498543] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:38.154 [2024-12-09 23:18:18.498569] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:36:38.154 [2024-12-09 23:18:18.498581] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:38.154 [2024-12-09 23:18:18.499054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:38.154 [2024-12-09 23:18:18.499075] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:38.154 [2024-12-09 23:18:18.499165] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:36:38.154 [2024-12-09 23:18:18.499189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:38.154 pt2 00:36:38.154 23:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.154 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:36:38.154 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:36:38.154 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:36:38.154 23:18:18 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.154 23:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:38.154 [2024-12-09 23:18:18.510459] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:36:38.154 [2024-12-09 23:18:18.510516] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:38.154 [2024-12-09 23:18:18.510539] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:36:38.154 [2024-12-09 23:18:18.510551] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:38.154 [2024-12-09 23:18:18.510994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:38.154 [2024-12-09 23:18:18.511013] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:36:38.154 [2024-12-09 23:18:18.511091] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:36:38.154 [2024-12-09 23:18:18.511113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:36:38.154 pt3 00:36:38.154 23:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.154 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:36:38.154 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:36:38.154 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:36:38.154 23:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.154 23:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:38.154 [2024-12-09 23:18:18.522402] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:36:38.154 [2024-12-09 
23:18:18.522461] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:38.154 [2024-12-09 23:18:18.522501] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:36:38.154 [2024-12-09 23:18:18.522512] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:38.154 [2024-12-09 23:18:18.522938] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:38.154 [2024-12-09 23:18:18.522957] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:36:38.154 [2024-12-09 23:18:18.523028] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:36:38.154 [2024-12-09 23:18:18.523055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:36:38.154 [2024-12-09 23:18:18.523192] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:36:38.154 [2024-12-09 23:18:18.523202] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:36:38.154 [2024-12-09 23:18:18.523482] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:36:38.154 [2024-12-09 23:18:18.523651] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:36:38.154 [2024-12-09 23:18:18.523666] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:36:38.155 [2024-12-09 23:18:18.523822] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:38.155 pt4 00:36:38.155 23:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.155 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:36:38.155 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:36:38.155 23:18:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:36:38.155 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:38.155 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:38.155 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:38.155 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:38.155 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:38.155 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:38.155 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:38.155 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:38.155 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:38.155 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:38.155 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:38.155 23:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.155 23:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:38.155 23:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.155 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:38.155 "name": "raid_bdev1", 00:36:38.155 "uuid": "800b4420-e96a-46d3-bec8-69806e09d269", 00:36:38.155 "strip_size_kb": 0, 00:36:38.155 "state": "online", 00:36:38.155 "raid_level": "raid1", 00:36:38.155 "superblock": true, 00:36:38.155 "num_base_bdevs": 4, 00:36:38.155 
"num_base_bdevs_discovered": 4, 00:36:38.155 "num_base_bdevs_operational": 4, 00:36:38.155 "base_bdevs_list": [ 00:36:38.155 { 00:36:38.155 "name": "pt1", 00:36:38.155 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:38.155 "is_configured": true, 00:36:38.155 "data_offset": 2048, 00:36:38.155 "data_size": 63488 00:36:38.155 }, 00:36:38.155 { 00:36:38.155 "name": "pt2", 00:36:38.155 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:38.155 "is_configured": true, 00:36:38.155 "data_offset": 2048, 00:36:38.155 "data_size": 63488 00:36:38.155 }, 00:36:38.155 { 00:36:38.155 "name": "pt3", 00:36:38.155 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:38.155 "is_configured": true, 00:36:38.155 "data_offset": 2048, 00:36:38.155 "data_size": 63488 00:36:38.155 }, 00:36:38.155 { 00:36:38.155 "name": "pt4", 00:36:38.155 "uuid": "00000000-0000-0000-0000-000000000004", 00:36:38.155 "is_configured": true, 00:36:38.155 "data_offset": 2048, 00:36:38.155 "data_size": 63488 00:36:38.155 } 00:36:38.155 ] 00:36:38.155 }' 00:36:38.155 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:38.155 23:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:38.413 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:36:38.413 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:36:38.413 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:36:38.413 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:36:38.413 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:36:38.413 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:36:38.413 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:38.413 23:18:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:36:38.413 23:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.413 23:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:38.413 [2024-12-09 23:18:18.958825] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:38.413 23:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.414 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:38.414 "name": "raid_bdev1", 00:36:38.414 "aliases": [ 00:36:38.414 "800b4420-e96a-46d3-bec8-69806e09d269" 00:36:38.414 ], 00:36:38.414 "product_name": "Raid Volume", 00:36:38.414 "block_size": 512, 00:36:38.414 "num_blocks": 63488, 00:36:38.414 "uuid": "800b4420-e96a-46d3-bec8-69806e09d269", 00:36:38.414 "assigned_rate_limits": { 00:36:38.414 "rw_ios_per_sec": 0, 00:36:38.414 "rw_mbytes_per_sec": 0, 00:36:38.414 "r_mbytes_per_sec": 0, 00:36:38.414 "w_mbytes_per_sec": 0 00:36:38.414 }, 00:36:38.414 "claimed": false, 00:36:38.414 "zoned": false, 00:36:38.414 "supported_io_types": { 00:36:38.414 "read": true, 00:36:38.414 "write": true, 00:36:38.414 "unmap": false, 00:36:38.414 "flush": false, 00:36:38.414 "reset": true, 00:36:38.414 "nvme_admin": false, 00:36:38.414 "nvme_io": false, 00:36:38.414 "nvme_io_md": false, 00:36:38.414 "write_zeroes": true, 00:36:38.414 "zcopy": false, 00:36:38.414 "get_zone_info": false, 00:36:38.414 "zone_management": false, 00:36:38.414 "zone_append": false, 00:36:38.414 "compare": false, 00:36:38.414 "compare_and_write": false, 00:36:38.414 "abort": false, 00:36:38.414 "seek_hole": false, 00:36:38.414 "seek_data": false, 00:36:38.414 "copy": false, 00:36:38.414 "nvme_iov_md": false 00:36:38.414 }, 00:36:38.414 "memory_domains": [ 00:36:38.414 { 00:36:38.414 "dma_device_id": "system", 00:36:38.414 
"dma_device_type": 1 00:36:38.414 }, 00:36:38.414 { 00:36:38.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:38.414 "dma_device_type": 2 00:36:38.414 }, 00:36:38.414 { 00:36:38.414 "dma_device_id": "system", 00:36:38.414 "dma_device_type": 1 00:36:38.414 }, 00:36:38.414 { 00:36:38.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:38.414 "dma_device_type": 2 00:36:38.414 }, 00:36:38.414 { 00:36:38.414 "dma_device_id": "system", 00:36:38.414 "dma_device_type": 1 00:36:38.414 }, 00:36:38.414 { 00:36:38.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:38.414 "dma_device_type": 2 00:36:38.414 }, 00:36:38.414 { 00:36:38.414 "dma_device_id": "system", 00:36:38.414 "dma_device_type": 1 00:36:38.414 }, 00:36:38.414 { 00:36:38.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:38.414 "dma_device_type": 2 00:36:38.414 } 00:36:38.414 ], 00:36:38.414 "driver_specific": { 00:36:38.414 "raid": { 00:36:38.414 "uuid": "800b4420-e96a-46d3-bec8-69806e09d269", 00:36:38.414 "strip_size_kb": 0, 00:36:38.414 "state": "online", 00:36:38.414 "raid_level": "raid1", 00:36:38.414 "superblock": true, 00:36:38.414 "num_base_bdevs": 4, 00:36:38.414 "num_base_bdevs_discovered": 4, 00:36:38.414 "num_base_bdevs_operational": 4, 00:36:38.414 "base_bdevs_list": [ 00:36:38.414 { 00:36:38.414 "name": "pt1", 00:36:38.414 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:38.414 "is_configured": true, 00:36:38.414 "data_offset": 2048, 00:36:38.414 "data_size": 63488 00:36:38.414 }, 00:36:38.414 { 00:36:38.414 "name": "pt2", 00:36:38.414 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:38.414 "is_configured": true, 00:36:38.414 "data_offset": 2048, 00:36:38.414 "data_size": 63488 00:36:38.414 }, 00:36:38.414 { 00:36:38.414 "name": "pt3", 00:36:38.414 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:38.414 "is_configured": true, 00:36:38.414 "data_offset": 2048, 00:36:38.414 "data_size": 63488 00:36:38.414 }, 00:36:38.414 { 00:36:38.414 "name": "pt4", 00:36:38.414 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:36:38.414 "is_configured": true, 00:36:38.414 "data_offset": 2048, 00:36:38.414 "data_size": 63488 00:36:38.414 } 00:36:38.414 ] 00:36:38.414 } 00:36:38.414 } 00:36:38.414 }' 00:36:38.414 23:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:38.414 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:36:38.414 pt2 00:36:38.414 pt3 00:36:38.414 pt4' 00:36:38.414 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:38.672 23:18:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:38.672 [2024-12-09 23:18:19.258764] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 800b4420-e96a-46d3-bec8-69806e09d269 '!=' 800b4420-e96a-46d3-bec8-69806e09d269 ']' 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.672 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:38.931 [2024-12-09 23:18:19.306510] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:36:38.931 
23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.931 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:36:38.931 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:38.931 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:38.931 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:38.931 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:38.931 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:38.931 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:38.931 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:38.931 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:38.931 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:38.931 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:38.931 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:38.931 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.931 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:38.931 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.931 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:38.931 "name": "raid_bdev1", 00:36:38.931 "uuid": "800b4420-e96a-46d3-bec8-69806e09d269", 00:36:38.931 "strip_size_kb": 0, 00:36:38.931 "state": 
"online", 00:36:38.931 "raid_level": "raid1", 00:36:38.931 "superblock": true, 00:36:38.931 "num_base_bdevs": 4, 00:36:38.931 "num_base_bdevs_discovered": 3, 00:36:38.931 "num_base_bdevs_operational": 3, 00:36:38.931 "base_bdevs_list": [ 00:36:38.931 { 00:36:38.931 "name": null, 00:36:38.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:38.931 "is_configured": false, 00:36:38.931 "data_offset": 0, 00:36:38.931 "data_size": 63488 00:36:38.931 }, 00:36:38.931 { 00:36:38.931 "name": "pt2", 00:36:38.931 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:38.931 "is_configured": true, 00:36:38.931 "data_offset": 2048, 00:36:38.931 "data_size": 63488 00:36:38.931 }, 00:36:38.931 { 00:36:38.931 "name": "pt3", 00:36:38.931 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:38.931 "is_configured": true, 00:36:38.931 "data_offset": 2048, 00:36:38.931 "data_size": 63488 00:36:38.931 }, 00:36:38.931 { 00:36:38.931 "name": "pt4", 00:36:38.931 "uuid": "00000000-0000-0000-0000-000000000004", 00:36:38.931 "is_configured": true, 00:36:38.931 "data_offset": 2048, 00:36:38.931 "data_size": 63488 00:36:38.931 } 00:36:38.931 ] 00:36:38.931 }' 00:36:38.931 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:38.931 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:39.200 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:36:39.200 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.201 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:39.201 [2024-12-09 23:18:19.750457] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:39.201 [2024-12-09 23:18:19.750634] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:39.201 [2024-12-09 23:18:19.750815] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:39.201 [2024-12-09 23:18:19.750989] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:39.201 [2024-12-09 23:18:19.751083] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:36:39.201 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.201 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:39.201 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.201 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:39.201 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:36:39.201 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.201 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:36:39.201 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:36:39.201 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:36:39.201 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:36:39.201 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:36:39.201 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.201 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:39.201 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.201 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:36:39.201 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:36:39.201 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:36:39.201 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.201 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:39.468 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.468 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:36:39.468 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:36:39.468 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:36:39.468 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.468 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:39.468 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.468 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:36:39.468 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:36:39.468 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:36:39.468 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:36:39.468 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:39.468 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.468 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:39.468 [2024-12-09 23:18:19.846439] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:39.468 [2024-12-09 
23:18:19.846619] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:39.468 [2024-12-09 23:18:19.846681] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:36:39.468 [2024-12-09 23:18:19.846779] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:39.468 [2024-12-09 23:18:19.849535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:39.468 [2024-12-09 23:18:19.849683] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:39.468 [2024-12-09 23:18:19.849859] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:36:39.468 [2024-12-09 23:18:19.849950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:39.468 pt2 00:36:39.468 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.468 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:36:39.468 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:39.468 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:39.468 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:39.468 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:39.468 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:39.468 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:39.468 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:39.468 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:39.468 23:18:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:39.468 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:39.468 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:39.468 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.468 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:39.468 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.468 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:39.468 "name": "raid_bdev1", 00:36:39.468 "uuid": "800b4420-e96a-46d3-bec8-69806e09d269", 00:36:39.468 "strip_size_kb": 0, 00:36:39.468 "state": "configuring", 00:36:39.468 "raid_level": "raid1", 00:36:39.468 "superblock": true, 00:36:39.468 "num_base_bdevs": 4, 00:36:39.468 "num_base_bdevs_discovered": 1, 00:36:39.468 "num_base_bdevs_operational": 3, 00:36:39.468 "base_bdevs_list": [ 00:36:39.468 { 00:36:39.468 "name": null, 00:36:39.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:39.468 "is_configured": false, 00:36:39.468 "data_offset": 2048, 00:36:39.468 "data_size": 63488 00:36:39.468 }, 00:36:39.468 { 00:36:39.468 "name": "pt2", 00:36:39.468 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:39.468 "is_configured": true, 00:36:39.468 "data_offset": 2048, 00:36:39.468 "data_size": 63488 00:36:39.468 }, 00:36:39.468 { 00:36:39.468 "name": null, 00:36:39.468 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:39.468 "is_configured": false, 00:36:39.468 "data_offset": 2048, 00:36:39.468 "data_size": 63488 00:36:39.468 }, 00:36:39.468 { 00:36:39.468 "name": null, 00:36:39.468 "uuid": "00000000-0000-0000-0000-000000000004", 00:36:39.468 "is_configured": false, 00:36:39.468 "data_offset": 2048, 00:36:39.468 "data_size": 63488 00:36:39.468 
} 00:36:39.468 ] 00:36:39.468 }' 00:36:39.468 23:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:39.468 23:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:39.726 23:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:36:39.726 23:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:36:39.726 23:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:36:39.726 23:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.726 23:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:39.727 [2024-12-09 23:18:20.294483] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:36:39.727 [2024-12-09 23:18:20.294556] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:39.727 [2024-12-09 23:18:20.294582] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:36:39.727 [2024-12-09 23:18:20.294596] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:39.727 [2024-12-09 23:18:20.295079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:39.727 [2024-12-09 23:18:20.295107] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:36:39.727 [2024-12-09 23:18:20.295202] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:36:39.727 [2024-12-09 23:18:20.295228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:36:39.727 pt3 00:36:39.727 23:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.727 23:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # 
verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:36:39.727 23:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:39.727 23:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:39.727 23:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:39.727 23:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:39.727 23:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:39.727 23:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:39.727 23:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:39.727 23:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:39.727 23:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:39.727 23:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:39.727 23:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:39.727 23:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.727 23:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:39.727 23:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.727 23:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:39.727 "name": "raid_bdev1", 00:36:39.727 "uuid": "800b4420-e96a-46d3-bec8-69806e09d269", 00:36:39.727 "strip_size_kb": 0, 00:36:39.727 "state": "configuring", 00:36:39.727 "raid_level": "raid1", 00:36:39.727 "superblock": true, 00:36:39.727 "num_base_bdevs": 4, 00:36:39.727 "num_base_bdevs_discovered": 2, 
00:36:39.727 "num_base_bdevs_operational": 3, 00:36:39.727 "base_bdevs_list": [ 00:36:39.727 { 00:36:39.727 "name": null, 00:36:39.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:39.727 "is_configured": false, 00:36:39.727 "data_offset": 2048, 00:36:39.727 "data_size": 63488 00:36:39.727 }, 00:36:39.727 { 00:36:39.727 "name": "pt2", 00:36:39.727 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:39.727 "is_configured": true, 00:36:39.727 "data_offset": 2048, 00:36:39.727 "data_size": 63488 00:36:39.727 }, 00:36:39.727 { 00:36:39.727 "name": "pt3", 00:36:39.727 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:39.727 "is_configured": true, 00:36:39.727 "data_offset": 2048, 00:36:39.727 "data_size": 63488 00:36:39.727 }, 00:36:39.727 { 00:36:39.727 "name": null, 00:36:39.727 "uuid": "00000000-0000-0000-0000-000000000004", 00:36:39.727 "is_configured": false, 00:36:39.727 "data_offset": 2048, 00:36:39.727 "data_size": 63488 00:36:39.727 } 00:36:39.727 ] 00:36:39.727 }' 00:36:39.727 23:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:39.727 23:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:40.294 23:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:36:40.294 23:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:36:40.294 23:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:36:40.294 23:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:36:40.294 23:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.294 23:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:40.294 [2024-12-09 23:18:20.734481] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:36:40.294 [2024-12-09 
23:18:20.734697] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:40.294 [2024-12-09 23:18:20.734764] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:36:40.294 [2024-12-09 23:18:20.734852] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:40.294 [2024-12-09 23:18:20.735358] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:40.294 [2024-12-09 23:18:20.735520] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:36:40.294 [2024-12-09 23:18:20.735717] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:36:40.294 [2024-12-09 23:18:20.735830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:36:40.294 [2024-12-09 23:18:20.736089] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:36:40.294 [2024-12-09 23:18:20.736186] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:36:40.294 [2024-12-09 23:18:20.736518] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:36:40.294 [2024-12-09 23:18:20.736792] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:36:40.294 [2024-12-09 23:18:20.736903] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:36:40.294 [2024-12-09 23:18:20.737168] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:40.294 pt4 00:36:40.294 23:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.294 23:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:36:40.294 23:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:40.294 23:18:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:40.294 23:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:40.294 23:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:40.294 23:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:40.294 23:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:40.294 23:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:40.294 23:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:40.294 23:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:40.294 23:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:40.294 23:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.294 23:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:40.294 23:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:40.294 23:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.294 23:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:40.294 "name": "raid_bdev1", 00:36:40.294 "uuid": "800b4420-e96a-46d3-bec8-69806e09d269", 00:36:40.294 "strip_size_kb": 0, 00:36:40.294 "state": "online", 00:36:40.294 "raid_level": "raid1", 00:36:40.294 "superblock": true, 00:36:40.294 "num_base_bdevs": 4, 00:36:40.294 "num_base_bdevs_discovered": 3, 00:36:40.294 "num_base_bdevs_operational": 3, 00:36:40.294 "base_bdevs_list": [ 00:36:40.294 { 00:36:40.294 "name": null, 00:36:40.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:40.294 
"is_configured": false, 00:36:40.294 "data_offset": 2048, 00:36:40.294 "data_size": 63488 00:36:40.294 }, 00:36:40.294 { 00:36:40.294 "name": "pt2", 00:36:40.294 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:40.294 "is_configured": true, 00:36:40.294 "data_offset": 2048, 00:36:40.294 "data_size": 63488 00:36:40.294 }, 00:36:40.294 { 00:36:40.294 "name": "pt3", 00:36:40.294 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:40.294 "is_configured": true, 00:36:40.294 "data_offset": 2048, 00:36:40.294 "data_size": 63488 00:36:40.294 }, 00:36:40.294 { 00:36:40.294 "name": "pt4", 00:36:40.294 "uuid": "00000000-0000-0000-0000-000000000004", 00:36:40.294 "is_configured": true, 00:36:40.294 "data_offset": 2048, 00:36:40.294 "data_size": 63488 00:36:40.294 } 00:36:40.294 ] 00:36:40.294 }' 00:36:40.294 23:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:40.294 23:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:40.863 [2024-12-09 23:18:21.206437] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:40.863 [2024-12-09 23:18:21.206608] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:40.863 [2024-12-09 23:18:21.206722] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:40.863 [2024-12-09 23:18:21.206821] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:40.863 [2024-12-09 23:18:21.206841] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 
00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:40.863 [2024-12-09 23:18:21.282438] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:40.863 [2024-12-09 23:18:21.282511] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:36:40.863 [2024-12-09 23:18:21.282538] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:36:40.863 [2024-12-09 23:18:21.282556] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:40.863 [2024-12-09 23:18:21.285162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:40.863 [2024-12-09 23:18:21.285211] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:40.863 [2024-12-09 23:18:21.285303] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:36:40.863 [2024-12-09 23:18:21.285353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:40.863 [2024-12-09 23:18:21.285542] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:36:40.863 [2024-12-09 23:18:21.285612] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:40.863 [2024-12-09 23:18:21.285633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:36:40.863 [2024-12-09 23:18:21.285712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:40.863 [2024-12-09 23:18:21.285816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:36:40.863 pt1 00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:40.863 "name": "raid_bdev1", 00:36:40.863 "uuid": "800b4420-e96a-46d3-bec8-69806e09d269", 00:36:40.863 "strip_size_kb": 0, 00:36:40.863 "state": "configuring", 00:36:40.863 "raid_level": "raid1", 00:36:40.863 "superblock": true, 00:36:40.863 "num_base_bdevs": 4, 00:36:40.863 "num_base_bdevs_discovered": 2, 00:36:40.863 "num_base_bdevs_operational": 3, 00:36:40.863 "base_bdevs_list": [ 00:36:40.863 { 00:36:40.863 "name": null, 00:36:40.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:40.863 "is_configured": false, 00:36:40.863 
"data_offset": 2048, 00:36:40.863 "data_size": 63488 00:36:40.863 }, 00:36:40.863 { 00:36:40.863 "name": "pt2", 00:36:40.863 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:40.863 "is_configured": true, 00:36:40.863 "data_offset": 2048, 00:36:40.863 "data_size": 63488 00:36:40.863 }, 00:36:40.863 { 00:36:40.863 "name": "pt3", 00:36:40.863 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:40.863 "is_configured": true, 00:36:40.863 "data_offset": 2048, 00:36:40.863 "data_size": 63488 00:36:40.863 }, 00:36:40.863 { 00:36:40.863 "name": null, 00:36:40.863 "uuid": "00000000-0000-0000-0000-000000000004", 00:36:40.863 "is_configured": false, 00:36:40.863 "data_offset": 2048, 00:36:40.863 "data_size": 63488 00:36:40.863 } 00:36:40.863 ] 00:36:40.863 }' 00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:40.863 23:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:41.121 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:36:41.121 23:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.121 23:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:41.121 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:36:41.121 23:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.121 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:36:41.121 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:36:41.121 23:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.121 23:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:36:41.121 [2024-12-09 23:18:21.742450] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:36:41.121 [2024-12-09 23:18:21.742516] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:41.121 [2024-12-09 23:18:21.742561] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:36:41.121 [2024-12-09 23:18:21.742574] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:41.121 [2024-12-09 23:18:21.743050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:41.121 [2024-12-09 23:18:21.743072] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:36:41.121 [2024-12-09 23:18:21.743163] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:36:41.121 [2024-12-09 23:18:21.743189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:36:41.121 [2024-12-09 23:18:21.743322] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:36:41.121 [2024-12-09 23:18:21.743333] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:36:41.121 [2024-12-09 23:18:21.743790] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:36:41.121 [2024-12-09 23:18:21.744076] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:36:41.121 [2024-12-09 23:18:21.744194] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:36:41.121 [2024-12-09 23:18:21.744483] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:41.121 pt4 00:36:41.121 23:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.121 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 
00:36:41.121 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:41.121 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:41.121 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:41.121 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:41.121 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:41.121 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:41.121 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:41.121 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:41.121 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:41.121 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:41.121 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:41.121 23:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.121 23:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:41.380 23:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.380 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:41.380 "name": "raid_bdev1", 00:36:41.380 "uuid": "800b4420-e96a-46d3-bec8-69806e09d269", 00:36:41.380 "strip_size_kb": 0, 00:36:41.380 "state": "online", 00:36:41.380 "raid_level": "raid1", 00:36:41.380 "superblock": true, 00:36:41.380 "num_base_bdevs": 4, 00:36:41.380 "num_base_bdevs_discovered": 3, 00:36:41.380 "num_base_bdevs_operational": 3, 00:36:41.380 
"base_bdevs_list": [ 00:36:41.380 { 00:36:41.380 "name": null, 00:36:41.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:41.380 "is_configured": false, 00:36:41.380 "data_offset": 2048, 00:36:41.380 "data_size": 63488 00:36:41.380 }, 00:36:41.380 { 00:36:41.380 "name": "pt2", 00:36:41.380 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:41.380 "is_configured": true, 00:36:41.380 "data_offset": 2048, 00:36:41.380 "data_size": 63488 00:36:41.380 }, 00:36:41.380 { 00:36:41.380 "name": "pt3", 00:36:41.380 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:41.380 "is_configured": true, 00:36:41.380 "data_offset": 2048, 00:36:41.380 "data_size": 63488 00:36:41.380 }, 00:36:41.380 { 00:36:41.380 "name": "pt4", 00:36:41.380 "uuid": "00000000-0000-0000-0000-000000000004", 00:36:41.380 "is_configured": true, 00:36:41.380 "data_offset": 2048, 00:36:41.380 "data_size": 63488 00:36:41.380 } 00:36:41.380 ] 00:36:41.380 }' 00:36:41.380 23:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:41.380 23:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:41.638 23:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:36:41.638 23:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.638 23:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:36:41.638 23:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:41.638 23:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.638 23:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:36:41.638 23:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:36:41.638 23:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd 
bdev_get_bdevs -b raid_bdev1 00:36:41.638 23:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.638 23:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:41.638 [2024-12-09 23:18:22.246673] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:41.896 23:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.896 23:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 800b4420-e96a-46d3-bec8-69806e09d269 '!=' 800b4420-e96a-46d3-bec8-69806e09d269 ']' 00:36:41.896 23:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74386 00:36:41.896 23:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74386 ']' 00:36:41.896 23:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74386 00:36:41.896 23:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:36:41.896 23:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:41.896 23:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74386 00:36:41.896 23:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:41.896 killing process with pid 74386 00:36:41.896 23:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:41.896 23:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74386' 00:36:41.896 23:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74386 00:36:41.896 [2024-12-09 23:18:22.341614] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:41.896 [2024-12-09 23:18:22.341735] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:36:41.896 23:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74386 00:36:41.896 [2024-12-09 23:18:22.341815] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:41.896 [2024-12-09 23:18:22.341832] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:36:42.155 [2024-12-09 23:18:22.759944] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:43.536 23:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:36:43.536 00:36:43.536 real 0m8.842s 00:36:43.536 user 0m13.959s 00:36:43.536 sys 0m1.754s 00:36:43.536 23:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:43.536 23:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:43.536 ************************************ 00:36:43.536 END TEST raid_superblock_test 00:36:43.536 ************************************ 00:36:43.536 23:18:24 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:36:43.536 23:18:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:36:43.536 23:18:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:43.536 23:18:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:43.536 ************************************ 00:36:43.536 START TEST raid_read_error_test 00:36:43.536 ************************************ 00:36:43.536 23:18:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:36:43.536 23:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:36:43.536 23:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:36:43.536 23:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local 
error_io_type=read 00:36:43.536 23:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:36:43.536 23:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:43.536 23:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:36:43.536 23:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:36:43.536 23:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:43.536 23:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:36:43.536 23:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:36:43.536 23:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:43.536 23:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:36:43.537 23:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:36:43.537 23:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:43.537 23:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:36:43.537 23:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:36:43.537 23:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:43.537 23:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:36:43.537 23:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:36:43.537 23:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:36:43.537 23:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:36:43.537 23:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:36:43.537 
23:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:36:43.537 23:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:36:43.537 23:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:36:43.537 23:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:36:43.537 23:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:36:43.537 23:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.nlJJCHSAKH 00:36:43.537 23:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74874 00:36:43.537 23:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74874 00:36:43.537 23:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:36:43.537 23:18:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 74874 ']' 00:36:43.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:43.537 23:18:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:43.537 23:18:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:43.537 23:18:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:43.537 23:18:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:43.537 23:18:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:43.537 [2024-12-09 23:18:24.145782] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:36:43.537 [2024-12-09 23:18:24.145922] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74874 ] 00:36:43.796 [2024-12-09 23:18:24.331704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:44.053 [2024-12-09 23:18:24.449559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:44.053 [2024-12-09 23:18:24.658010] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:44.053 [2024-12-09 23:18:24.658244] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:44.637 23:18:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:44.637 23:18:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:36:44.637 23:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:36:44.637 23:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:36:44.637 23:18:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.637 23:18:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:44.637 BaseBdev1_malloc 00:36:44.637 23:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.637 23:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:36:44.637 23:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.637 23:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:44.637 true 00:36:44.637 23:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:36:44.637 23:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:36:44.637 23:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.637 23:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:44.637 [2024-12-09 23:18:25.054649] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:36:44.637 [2024-12-09 23:18:25.054708] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:44.637 [2024-12-09 23:18:25.054732] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:36:44.637 [2024-12-09 23:18:25.054746] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:44.637 [2024-12-09 23:18:25.057099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:44.637 [2024-12-09 23:18:25.057271] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:36:44.637 BaseBdev1 00:36:44.637 23:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.637 23:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:36:44.637 23:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:36:44.637 23:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.637 23:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:44.637 BaseBdev2_malloc 00:36:44.637 23:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.637 23:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:36:44.637 23:18:25 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.637 23:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:44.637 true 00:36:44.637 23:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.637 23:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:36:44.637 23:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.637 23:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:44.637 [2024-12-09 23:18:25.117953] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:36:44.637 [2024-12-09 23:18:25.118011] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:44.637 [2024-12-09 23:18:25.118030] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:36:44.637 [2024-12-09 23:18:25.118043] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:44.637 [2024-12-09 23:18:25.120410] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:44.637 [2024-12-09 23:18:25.120451] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:36:44.637 BaseBdev2 00:36:44.637 23:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.637 23:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:36:44.637 23:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:36:44.637 23:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.637 23:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:44.637 BaseBdev3_malloc 00:36:44.637 23:18:25 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.637 23:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:36:44.637 23:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.637 23:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:44.637 true 00:36:44.637 23:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.637 23:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:36:44.637 23:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.637 23:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:44.637 [2024-12-09 23:18:25.187320] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:36:44.637 [2024-12-09 23:18:25.187380] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:44.637 [2024-12-09 23:18:25.187416] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:36:44.637 [2024-12-09 23:18:25.187431] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:44.638 [2024-12-09 23:18:25.189895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:44.638 [2024-12-09 23:18:25.189941] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:36:44.638 BaseBdev3 00:36:44.638 23:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.638 23:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:36:44.638 23:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:36:44.638 23:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.638 23:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:44.638 BaseBdev4_malloc 00:36:44.638 23:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.638 23:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:36:44.638 23:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.638 23:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:44.638 true 00:36:44.638 23:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.638 23:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:36:44.638 23:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.638 23:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:44.638 [2024-12-09 23:18:25.249576] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:36:44.638 [2024-12-09 23:18:25.249637] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:44.638 [2024-12-09 23:18:25.249658] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:36:44.638 [2024-12-09 23:18:25.249672] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:44.638 [2024-12-09 23:18:25.252056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:44.638 [2024-12-09 23:18:25.252104] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:36:44.638 BaseBdev4 00:36:44.638 23:18:25 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.638 23:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:36:44.638 23:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.638 23:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:44.638 [2024-12-09 23:18:25.257615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:44.638 [2024-12-09 23:18:25.259695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:44.638 [2024-12-09 23:18:25.259771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:44.638 [2024-12-09 23:18:25.259834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:36:44.638 [2024-12-09 23:18:25.260051] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:36:44.638 [2024-12-09 23:18:25.260066] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:36:44.638 [2024-12-09 23:18:25.260314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:36:44.638 [2024-12-09 23:18:25.260493] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:36:44.638 [2024-12-09 23:18:25.260510] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:36:44.638 [2024-12-09 23:18:25.260673] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:44.638 23:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.638 23:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:36:44.638 23:18:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:44.638 23:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:44.638 23:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:44.638 23:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:44.638 23:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:44.638 23:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:44.638 23:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:44.638 23:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:44.638 23:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:44.638 23:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:44.638 23:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:44.638 23:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.638 23:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:44.902 23:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.902 23:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:44.902 "name": "raid_bdev1", 00:36:44.902 "uuid": "0d9ef26f-c2fb-4a5c-b06d-f92dab3e3322", 00:36:44.902 "strip_size_kb": 0, 00:36:44.902 "state": "online", 00:36:44.902 "raid_level": "raid1", 00:36:44.902 "superblock": true, 00:36:44.902 "num_base_bdevs": 4, 00:36:44.902 "num_base_bdevs_discovered": 4, 00:36:44.902 "num_base_bdevs_operational": 4, 00:36:44.902 "base_bdevs_list": [ 00:36:44.902 { 
00:36:44.902 "name": "BaseBdev1", 00:36:44.902 "uuid": "4b30311c-128a-596f-a08c-6f85009d10a9", 00:36:44.902 "is_configured": true, 00:36:44.902 "data_offset": 2048, 00:36:44.902 "data_size": 63488 00:36:44.902 }, 00:36:44.902 { 00:36:44.902 "name": "BaseBdev2", 00:36:44.902 "uuid": "4503417d-bbbe-5c33-a9c5-13b9ccd26ea7", 00:36:44.902 "is_configured": true, 00:36:44.902 "data_offset": 2048, 00:36:44.902 "data_size": 63488 00:36:44.902 }, 00:36:44.902 { 00:36:44.902 "name": "BaseBdev3", 00:36:44.902 "uuid": "3a0514d4-a3aa-5455-a0bd-219461b93084", 00:36:44.902 "is_configured": true, 00:36:44.902 "data_offset": 2048, 00:36:44.902 "data_size": 63488 00:36:44.902 }, 00:36:44.902 { 00:36:44.902 "name": "BaseBdev4", 00:36:44.902 "uuid": "7e05d289-d5d8-5ecb-9542-ee43c501aa88", 00:36:44.902 "is_configured": true, 00:36:44.902 "data_offset": 2048, 00:36:44.902 "data_size": 63488 00:36:44.902 } 00:36:44.902 ] 00:36:44.902 }' 00:36:44.902 23:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:44.902 23:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:45.160 23:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:36:45.160 23:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:36:45.160 [2024-12-09 23:18:25.782178] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:36:46.119 23:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:36:46.119 23:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.119 23:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:46.119 23:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.119 23:18:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:36:46.119 23:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:36:46.119 23:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:36:46.119 23:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:36:46.119 23:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:36:46.119 23:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:46.119 23:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:46.119 23:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:46.119 23:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:46.119 23:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:46.119 23:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:46.119 23:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:46.119 23:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:46.119 23:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:46.119 23:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:46.119 23:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.119 23:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:46.119 23:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:46.119 23:18:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.378 23:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:46.378 "name": "raid_bdev1", 00:36:46.378 "uuid": "0d9ef26f-c2fb-4a5c-b06d-f92dab3e3322", 00:36:46.378 "strip_size_kb": 0, 00:36:46.378 "state": "online", 00:36:46.378 "raid_level": "raid1", 00:36:46.378 "superblock": true, 00:36:46.378 "num_base_bdevs": 4, 00:36:46.378 "num_base_bdevs_discovered": 4, 00:36:46.378 "num_base_bdevs_operational": 4, 00:36:46.378 "base_bdevs_list": [ 00:36:46.378 { 00:36:46.378 "name": "BaseBdev1", 00:36:46.378 "uuid": "4b30311c-128a-596f-a08c-6f85009d10a9", 00:36:46.378 "is_configured": true, 00:36:46.378 "data_offset": 2048, 00:36:46.378 "data_size": 63488 00:36:46.378 }, 00:36:46.378 { 00:36:46.378 "name": "BaseBdev2", 00:36:46.378 "uuid": "4503417d-bbbe-5c33-a9c5-13b9ccd26ea7", 00:36:46.378 "is_configured": true, 00:36:46.378 "data_offset": 2048, 00:36:46.378 "data_size": 63488 00:36:46.378 }, 00:36:46.378 { 00:36:46.378 "name": "BaseBdev3", 00:36:46.378 "uuid": "3a0514d4-a3aa-5455-a0bd-219461b93084", 00:36:46.378 "is_configured": true, 00:36:46.378 "data_offset": 2048, 00:36:46.378 "data_size": 63488 00:36:46.378 }, 00:36:46.378 { 00:36:46.378 "name": "BaseBdev4", 00:36:46.378 "uuid": "7e05d289-d5d8-5ecb-9542-ee43c501aa88", 00:36:46.378 "is_configured": true, 00:36:46.378 "data_offset": 2048, 00:36:46.378 "data_size": 63488 00:36:46.378 } 00:36:46.378 ] 00:36:46.378 }' 00:36:46.378 23:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:46.378 23:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:46.636 23:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:36:46.636 23:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.636 23:18:27 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:36:46.636 [2024-12-09 23:18:27.170985] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:46.636 [2024-12-09 23:18:27.171191] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:46.636 [2024-12-09 23:18:27.173984] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:46.636 [2024-12-09 23:18:27.174053] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:46.636 [2024-12-09 23:18:27.174169] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:46.636 [2024-12-09 23:18:27.174185] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:36:46.636 { 00:36:46.636 "results": [ 00:36:46.636 { 00:36:46.636 "job": "raid_bdev1", 00:36:46.636 "core_mask": "0x1", 00:36:46.636 "workload": "randrw", 00:36:46.636 "percentage": 50, 00:36:46.636 "status": "finished", 00:36:46.636 "queue_depth": 1, 00:36:46.636 "io_size": 131072, 00:36:46.636 "runtime": 1.389149, 00:36:46.636 "iops": 10435.885567350946, 00:36:46.636 "mibps": 1304.4856959188683, 00:36:46.636 "io_failed": 0, 00:36:46.636 "io_timeout": 0, 00:36:46.636 "avg_latency_us": 92.98509410477669, 00:36:46.636 "min_latency_us": 25.0859437751004, 00:36:46.636 "max_latency_us": 1519.9614457831326 00:36:46.636 } 00:36:46.636 ], 00:36:46.636 "core_count": 1 00:36:46.636 } 00:36:46.636 23:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.636 23:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74874 00:36:46.636 23:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 74874 ']' 00:36:46.636 23:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 74874 00:36:46.636 23:18:27 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:36:46.636 23:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:46.636 23:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74874 00:36:46.636 killing process with pid 74874 00:36:46.636 23:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:46.637 23:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:46.637 23:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74874' 00:36:46.637 23:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 74874 00:36:46.637 [2024-12-09 23:18:27.224040] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:46.637 23:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 74874 00:36:47.202 [2024-12-09 23:18:27.551147] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:48.579 23:18:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.nlJJCHSAKH 00:36:48.579 23:18:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:36:48.579 23:18:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:36:48.579 23:18:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:36:48.579 23:18:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:36:48.579 23:18:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:36:48.579 23:18:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:36:48.579 ************************************ 00:36:48.579 23:18:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:36:48.579 00:36:48.579 real 0m4.753s 
00:36:48.579 user 0m5.557s 00:36:48.579 sys 0m0.660s 00:36:48.579 23:18:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:48.579 23:18:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:48.579 END TEST raid_read_error_test 00:36:48.579 ************************************ 00:36:48.579 23:18:28 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:36:48.579 23:18:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:36:48.579 23:18:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:48.579 23:18:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:48.579 ************************************ 00:36:48.579 START TEST raid_write_error_test 00:36:48.579 ************************************ 00:36:48.579 23:18:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:36:48.579 23:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:36:48.579 23:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:36:48.579 23:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:36:48.579 23:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:36:48.579 23:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:48.579 23:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:36:48.579 23:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:36:48.579 23:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:48.579 23:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:36:48.579 23:18:28 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:36:48.579 23:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:48.579 23:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:36:48.579 23:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:36:48.579 23:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:48.579 23:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:36:48.579 23:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:36:48.579 23:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:48.579 23:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:36:48.579 23:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:36:48.579 23:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:36:48.579 23:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:36:48.579 23:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:36:48.579 23:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:36:48.579 23:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:36:48.579 23:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:36:48.579 23:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:36:48.579 23:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:36:48.579 23:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.dVrkT2OZCU 00:36:48.579 23:18:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75024 00:36:48.579 23:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:36:48.579 23:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75024 00:36:48.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:48.579 23:18:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75024 ']' 00:36:48.579 23:18:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:48.579 23:18:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:48.579 23:18:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:48.579 23:18:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:48.579 23:18:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:48.579 [2024-12-09 23:18:28.976533] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:36:48.579 [2024-12-09 23:18:28.976675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75024 ] 00:36:48.579 [2024-12-09 23:18:29.145439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:48.844 [2024-12-09 23:18:29.273708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:49.108 [2024-12-09 23:18:29.496024] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:49.108 [2024-12-09 23:18:29.496074] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:49.367 23:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:49.367 23:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:36:49.367 23:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:36:49.367 23:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:36:49.367 23:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.367 23:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:49.367 BaseBdev1_malloc 00:36:49.367 23:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.367 23:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:36:49.367 23:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.367 23:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:49.367 true 00:36:49.367 23:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:36:49.367 23:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:36:49.367 23:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.367 23:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:49.367 [2024-12-09 23:18:29.917981] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:36:49.367 [2024-12-09 23:18:29.918039] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:49.367 [2024-12-09 23:18:29.918066] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:36:49.367 [2024-12-09 23:18:29.918080] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:49.367 [2024-12-09 23:18:29.920795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:49.367 [2024-12-09 23:18:29.920855] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:36:49.367 BaseBdev1 00:36:49.367 23:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.367 23:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:36:49.367 23:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:36:49.367 23:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.367 23:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:49.367 BaseBdev2_malloc 00:36:49.367 23:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.367 23:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:36:49.367 23:18:29 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.367 23:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:49.367 true 00:36:49.367 23:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.367 23:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:36:49.367 23:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.367 23:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:49.367 [2024-12-09 23:18:29.976156] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:36:49.367 [2024-12-09 23:18:29.976219] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:49.367 [2024-12-09 23:18:29.976241] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:36:49.367 [2024-12-09 23:18:29.976256] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:49.367 [2024-12-09 23:18:29.979023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:49.367 [2024-12-09 23:18:29.979072] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:36:49.367 BaseBdev2 00:36:49.367 23:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.367 23:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:36:49.367 23:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:36:49.367 23:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.367 23:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:36:49.626 BaseBdev3_malloc 00:36:49.626 23:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.626 23:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:36:49.626 23:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.626 23:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:49.626 true 00:36:49.626 23:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.626 23:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:36:49.626 23:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.626 23:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:49.626 [2024-12-09 23:18:30.069688] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:36:49.626 [2024-12-09 23:18:30.069874] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:49.626 [2024-12-09 23:18:30.069936] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:36:49.626 [2024-12-09 23:18:30.070018] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:49.626 [2024-12-09 23:18:30.072994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:49.626 [2024-12-09 23:18:30.073171] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:36:49.626 BaseBdev3 00:36:49.626 23:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.626 23:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:36:49.626 23:18:30 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:36:49.626 23:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.626 23:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:49.626 BaseBdev4_malloc 00:36:49.626 23:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.626 23:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:36:49.626 23:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.626 23:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:49.626 true 00:36:49.626 23:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.626 23:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:36:49.626 23:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.626 23:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:49.626 [2024-12-09 23:18:30.136541] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:36:49.626 [2024-12-09 23:18:30.136744] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:49.626 [2024-12-09 23:18:30.136830] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:36:49.626 [2024-12-09 23:18:30.136921] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:49.626 [2024-12-09 23:18:30.140032] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:49.626 [2024-12-09 23:18:30.140210] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:36:49.626 BaseBdev4 
00:36:49.626 23:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.626 23:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:36:49.626 23:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.626 23:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:49.626 [2024-12-09 23:18:30.148654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:49.626 [2024-12-09 23:18:30.151126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:49.626 [2024-12-09 23:18:30.151348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:49.626 [2024-12-09 23:18:30.151447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:36:49.626 [2024-12-09 23:18:30.151720] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:36:49.626 [2024-12-09 23:18:30.151740] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:36:49.626 [2024-12-09 23:18:30.152036] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:36:49.626 [2024-12-09 23:18:30.152218] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:36:49.626 [2024-12-09 23:18:30.152230] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:36:49.626 [2024-12-09 23:18:30.152446] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:49.627 23:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.627 23:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:36:49.627 23:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:49.627 23:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:49.627 23:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:49.627 23:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:49.627 23:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:49.627 23:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:49.627 23:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:49.627 23:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:49.627 23:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:49.627 23:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:49.627 23:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.627 23:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:49.627 23:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:49.627 23:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.627 23:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:49.627 "name": "raid_bdev1", 00:36:49.627 "uuid": "8ce66497-757d-4e38-924f-f16ae1192aff", 00:36:49.627 "strip_size_kb": 0, 00:36:49.627 "state": "online", 00:36:49.627 "raid_level": "raid1", 00:36:49.627 "superblock": true, 00:36:49.627 "num_base_bdevs": 4, 00:36:49.627 "num_base_bdevs_discovered": 4, 00:36:49.627 
"num_base_bdevs_operational": 4, 00:36:49.627 "base_bdevs_list": [ 00:36:49.627 { 00:36:49.627 "name": "BaseBdev1", 00:36:49.627 "uuid": "b23d1279-b104-524e-8a65-89cca0870333", 00:36:49.627 "is_configured": true, 00:36:49.627 "data_offset": 2048, 00:36:49.627 "data_size": 63488 00:36:49.627 }, 00:36:49.627 { 00:36:49.627 "name": "BaseBdev2", 00:36:49.627 "uuid": "08966670-99bc-52f6-bee9-c920f5bfeae6", 00:36:49.627 "is_configured": true, 00:36:49.627 "data_offset": 2048, 00:36:49.627 "data_size": 63488 00:36:49.627 }, 00:36:49.627 { 00:36:49.627 "name": "BaseBdev3", 00:36:49.627 "uuid": "fa80120d-2286-537b-b801-d02c7fc3489e", 00:36:49.627 "is_configured": true, 00:36:49.627 "data_offset": 2048, 00:36:49.627 "data_size": 63488 00:36:49.627 }, 00:36:49.627 { 00:36:49.627 "name": "BaseBdev4", 00:36:49.627 "uuid": "aacdceb6-d993-59de-bc81-d1a2416f7461", 00:36:49.627 "is_configured": true, 00:36:49.627 "data_offset": 2048, 00:36:49.627 "data_size": 63488 00:36:49.627 } 00:36:49.627 ] 00:36:49.627 }' 00:36:49.627 23:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:49.627 23:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:50.193 23:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:36:50.193 23:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:36:50.193 [2024-12-09 23:18:30.665506] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:36:51.128 23:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:36:51.128 23:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.128 23:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:51.128 [2024-12-09 23:18:31.580428] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:36:51.128 [2024-12-09 23:18:31.580490] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:51.128 [2024-12-09 23:18:31.580721] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:36:51.128 23:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.128 23:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:36:51.128 23:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:36:51.128 23:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:36:51.128 23:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:36:51.128 23:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:36:51.128 23:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:51.128 23:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:51.128 23:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:51.128 23:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:51.128 23:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:51.128 23:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:51.128 23:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:51.128 23:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:51.128 23:18:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:36:51.128 23:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:51.128 23:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:51.128 23:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.128 23:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:51.128 23:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.128 23:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:51.128 "name": "raid_bdev1", 00:36:51.128 "uuid": "8ce66497-757d-4e38-924f-f16ae1192aff", 00:36:51.128 "strip_size_kb": 0, 00:36:51.128 "state": "online", 00:36:51.128 "raid_level": "raid1", 00:36:51.128 "superblock": true, 00:36:51.128 "num_base_bdevs": 4, 00:36:51.128 "num_base_bdevs_discovered": 3, 00:36:51.128 "num_base_bdevs_operational": 3, 00:36:51.128 "base_bdevs_list": [ 00:36:51.128 { 00:36:51.128 "name": null, 00:36:51.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:51.128 "is_configured": false, 00:36:51.128 "data_offset": 0, 00:36:51.128 "data_size": 63488 00:36:51.128 }, 00:36:51.128 { 00:36:51.128 "name": "BaseBdev2", 00:36:51.128 "uuid": "08966670-99bc-52f6-bee9-c920f5bfeae6", 00:36:51.128 "is_configured": true, 00:36:51.128 "data_offset": 2048, 00:36:51.128 "data_size": 63488 00:36:51.128 }, 00:36:51.128 { 00:36:51.128 "name": "BaseBdev3", 00:36:51.128 "uuid": "fa80120d-2286-537b-b801-d02c7fc3489e", 00:36:51.128 "is_configured": true, 00:36:51.128 "data_offset": 2048, 00:36:51.128 "data_size": 63488 00:36:51.128 }, 00:36:51.128 { 00:36:51.128 "name": "BaseBdev4", 00:36:51.128 "uuid": "aacdceb6-d993-59de-bc81-d1a2416f7461", 00:36:51.128 "is_configured": true, 00:36:51.128 "data_offset": 2048, 00:36:51.128 "data_size": 63488 00:36:51.128 } 00:36:51.128 ] 
00:36:51.128 }' 00:36:51.128 23:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:51.128 23:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:51.388 23:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:36:51.388 23:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.388 23:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:51.388 [2024-12-09 23:18:31.951370] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:51.388 [2024-12-09 23:18:31.951419] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:51.388 [2024-12-09 23:18:31.954136] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:51.388 [2024-12-09 23:18:31.954184] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:51.388 [2024-12-09 23:18:31.954295] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:51.388 [2024-12-09 23:18:31.954311] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:36:51.388 { 00:36:51.388 "results": [ 00:36:51.388 { 00:36:51.388 "job": "raid_bdev1", 00:36:51.388 "core_mask": "0x1", 00:36:51.388 "workload": "randrw", 00:36:51.388 "percentage": 50, 00:36:51.388 "status": "finished", 00:36:51.388 "queue_depth": 1, 00:36:51.388 "io_size": 131072, 00:36:51.388 "runtime": 1.285507, 00:36:51.388 "iops": 11481.851129554332, 00:36:51.388 "mibps": 1435.2313911942915, 00:36:51.388 "io_failed": 0, 00:36:51.388 "io_timeout": 0, 00:36:51.388 "avg_latency_us": 84.28769212350758, 00:36:51.388 "min_latency_us": 25.188755020080322, 00:36:51.388 "max_latency_us": 1592.340562248996 00:36:51.388 } 00:36:51.388 ], 00:36:51.388 "core_count": 1 
00:36:51.388 } 00:36:51.388 23:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.388 23:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75024 00:36:51.388 23:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75024 ']' 00:36:51.388 23:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75024 00:36:51.388 23:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:36:51.388 23:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:51.388 23:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75024 00:36:51.388 killing process with pid 75024 00:36:51.388 23:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:51.388 23:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:51.388 23:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75024' 00:36:51.388 23:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75024 00:36:51.388 [2024-12-09 23:18:32.006431] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:51.388 23:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75024 00:36:51.953 [2024-12-09 23:18:32.347885] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:53.330 23:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.dVrkT2OZCU 00:36:53.330 23:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:36:53.330 23:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:36:53.330 23:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:36:53.330 23:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:36:53.330 23:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:36:53.330 23:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:36:53.330 23:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:36:53.330 00:36:53.330 real 0m4.728s 00:36:53.330 user 0m5.475s 00:36:53.330 sys 0m0.657s 00:36:53.330 23:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:53.330 ************************************ 00:36:53.330 END TEST raid_write_error_test 00:36:53.330 ************************************ 00:36:53.330 23:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:53.330 23:18:33 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:36:53.330 23:18:33 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:36:53.330 23:18:33 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:36:53.330 23:18:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:36:53.330 23:18:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:53.330 23:18:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:53.330 ************************************ 00:36:53.330 START TEST raid_rebuild_test 00:36:53.330 ************************************ 00:36:53.330 23:18:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:36:53.330 23:18:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:36:53.330 23:18:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:36:53.330 23:18:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:36:53.330 
23:18:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:36:53.330 23:18:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:36:53.330 23:18:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:36:53.330 23:18:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:36:53.330 23:18:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:36:53.330 23:18:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:36:53.330 23:18:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:36:53.330 23:18:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:36:53.330 23:18:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:36:53.330 23:18:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:36:53.330 23:18:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:36:53.330 23:18:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:36:53.330 23:18:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:36:53.330 23:18:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:36:53.330 23:18:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:36:53.330 23:18:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:36:53.330 23:18:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:36:53.330 23:18:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:36:53.330 23:18:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:36:53.330 23:18:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:36:53.330 23:18:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75169 00:36:53.330 23:18:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:36:53.330 23:18:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75169 00:36:53.330 23:18:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75169 ']' 00:36:53.330 23:18:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:53.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:53.330 23:18:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:53.330 23:18:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:53.330 23:18:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:53.330 23:18:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:53.330 [2024-12-09 23:18:33.770205] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:36:53.330 I/O size of 3145728 is greater than zero copy threshold (65536). 00:36:53.330 Zero copy mechanism will not be used. 
00:36:53.330 [2024-12-09 23:18:33.770557] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75169 ] 00:36:53.330 [2024-12-09 23:18:33.947848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:53.589 [2024-12-09 23:18:34.105422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:53.848 [2024-12-09 23:18:34.324344] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:53.848 [2024-12-09 23:18:34.324431] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:54.107 23:18:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:54.107 23:18:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:36:54.107 23:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:36:54.107 23:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:36:54.107 23:18:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.107 23:18:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:54.107 BaseBdev1_malloc 00:36:54.107 23:18:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.107 23:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:36:54.107 23:18:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.107 23:18:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:54.107 [2024-12-09 23:18:34.692380] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:36:54.107 
[2024-12-09 23:18:34.692618] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:54.107 [2024-12-09 23:18:34.692654] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:36:54.107 [2024-12-09 23:18:34.692671] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:54.107 [2024-12-09 23:18:34.695169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:54.108 [2024-12-09 23:18:34.695217] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:36:54.108 BaseBdev1 00:36:54.108 23:18:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.108 23:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:36:54.108 23:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:36:54.108 23:18:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.108 23:18:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:54.367 BaseBdev2_malloc 00:36:54.367 23:18:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.367 23:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:36:54.367 23:18:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.367 23:18:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:54.367 [2024-12-09 23:18:34.750714] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:36:54.367 [2024-12-09 23:18:34.750960] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:54.367 [2024-12-09 23:18:34.751026] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:36:54.367 [2024-12-09 23:18:34.751127] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:54.367 [2024-12-09 23:18:34.753871] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:54.367 [2024-12-09 23:18:34.754030] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:36:54.367 BaseBdev2 00:36:54.367 23:18:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.367 23:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:36:54.367 23:18:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.367 23:18:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:54.367 spare_malloc 00:36:54.367 23:18:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.367 23:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:36:54.367 23:18:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.367 23:18:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:54.367 spare_delay 00:36:54.367 23:18:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.367 23:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:36:54.367 23:18:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.367 23:18:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:54.367 [2024-12-09 23:18:34.832969] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:54.367 [2024-12-09 23:18:34.833038] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:36:54.367 [2024-12-09 23:18:34.833060] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:36:54.367 [2024-12-09 23:18:34.833074] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:54.367 [2024-12-09 23:18:34.835501] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:54.367 [2024-12-09 23:18:34.835692] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:54.367 spare 00:36:54.367 23:18:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.367 23:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:36:54.367 23:18:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.367 23:18:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:54.367 [2024-12-09 23:18:34.844993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:54.367 [2024-12-09 23:18:34.847055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:54.367 [2024-12-09 23:18:34.847285] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:36:54.367 [2024-12-09 23:18:34.847310] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:36:54.367 [2024-12-09 23:18:34.847619] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:36:54.367 [2024-12-09 23:18:34.847779] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:36:54.367 [2024-12-09 23:18:34.847794] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:36:54.367 [2024-12-09 23:18:34.847965] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:36:54.367 23:18:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.367 23:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:54.367 23:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:54.367 23:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:54.367 23:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:54.367 23:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:54.367 23:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:54.367 23:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:54.367 23:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:54.367 23:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:54.367 23:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:54.367 23:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:54.367 23:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:54.367 23:18:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.367 23:18:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:54.367 23:18:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.367 23:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:54.367 "name": "raid_bdev1", 00:36:54.367 "uuid": "a40ead31-fda5-4671-87bd-cc0c450d12df", 00:36:54.367 "strip_size_kb": 0, 00:36:54.367 "state": "online", 00:36:54.367 
"raid_level": "raid1", 00:36:54.367 "superblock": false, 00:36:54.367 "num_base_bdevs": 2, 00:36:54.367 "num_base_bdevs_discovered": 2, 00:36:54.367 "num_base_bdevs_operational": 2, 00:36:54.367 "base_bdevs_list": [ 00:36:54.367 { 00:36:54.367 "name": "BaseBdev1", 00:36:54.367 "uuid": "515e58c8-0609-52ea-8fa2-47c1572ab9b8", 00:36:54.367 "is_configured": true, 00:36:54.367 "data_offset": 0, 00:36:54.367 "data_size": 65536 00:36:54.368 }, 00:36:54.368 { 00:36:54.368 "name": "BaseBdev2", 00:36:54.368 "uuid": "7ab0bec8-4694-5245-b6d6-f7b6e515e8b9", 00:36:54.368 "is_configured": true, 00:36:54.368 "data_offset": 0, 00:36:54.368 "data_size": 65536 00:36:54.368 } 00:36:54.368 ] 00:36:54.368 }' 00:36:54.368 23:18:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:54.368 23:18:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:54.935 23:18:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:36:54.935 23:18:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:36:54.935 23:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.935 23:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:54.935 [2024-12-09 23:18:35.304696] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:54.935 23:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.935 23:18:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:36:54.935 23:18:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:54.935 23:18:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:36:54.935 23:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.935 23:18:35 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:54.935 23:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.935 23:18:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:36:54.935 23:18:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:36:54.935 23:18:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:36:54.935 23:18:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:36:54.935 23:18:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:36:54.935 23:18:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:36:54.935 23:18:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:36:54.935 23:18:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:36:54.935 23:18:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:36:54.935 23:18:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:36:54.935 23:18:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:36:54.935 23:18:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:36:54.935 23:18:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:36:54.935 23:18:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:36:55.194 [2024-12-09 23:18:35.600000] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:36:55.194 /dev/nbd0 00:36:55.194 23:18:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:36:55.195 23:18:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:36:55.195 23:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:36:55.195 23:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:36:55.195 23:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:36:55.195 23:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:36:55.195 23:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:36:55.195 23:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:36:55.195 23:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:36:55.195 23:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:36:55.195 23:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:55.195 1+0 records in 00:36:55.195 1+0 records out 00:36:55.195 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405582 s, 10.1 MB/s 00:36:55.195 23:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:55.195 23:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:36:55.195 23:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:55.195 23:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:36:55.195 23:18:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:36:55.195 23:18:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:55.195 23:18:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:36:55.195 23:18:35 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:36:55.195 23:18:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:36:55.195 23:18:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:37:00.463 65536+0 records in 00:37:00.463 65536+0 records out 00:37:00.463 33554432 bytes (34 MB, 32 MiB) copied, 5.19666 s, 6.5 MB/s 00:37:00.463 23:18:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:37:00.463 23:18:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:37:00.463 23:18:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:37:00.463 23:18:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:00.463 23:18:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:37:00.463 23:18:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:00.463 23:18:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:37:00.463 [2024-12-09 23:18:41.061436] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:00.463 23:18:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:00.463 23:18:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:00.463 23:18:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:00.463 23:18:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:00.463 23:18:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:00.463 23:18:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:00.721 23:18:41 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:37:00.721 23:18:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:37:00.721 23:18:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:37:00.721 23:18:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.721 23:18:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:00.721 [2024-12-09 23:18:41.105479] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:00.721 23:18:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.721 23:18:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:00.721 23:18:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:00.721 23:18:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:00.721 23:18:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:00.721 23:18:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:00.721 23:18:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:37:00.721 23:18:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:00.721 23:18:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:00.721 23:18:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:00.721 23:18:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:00.721 23:18:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:00.721 23:18:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:00.721 23:18:41 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.721 23:18:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:00.721 23:18:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.721 23:18:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:00.721 "name": "raid_bdev1", 00:37:00.721 "uuid": "a40ead31-fda5-4671-87bd-cc0c450d12df", 00:37:00.721 "strip_size_kb": 0, 00:37:00.721 "state": "online", 00:37:00.721 "raid_level": "raid1", 00:37:00.721 "superblock": false, 00:37:00.721 "num_base_bdevs": 2, 00:37:00.721 "num_base_bdevs_discovered": 1, 00:37:00.721 "num_base_bdevs_operational": 1, 00:37:00.721 "base_bdevs_list": [ 00:37:00.721 { 00:37:00.721 "name": null, 00:37:00.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:00.721 "is_configured": false, 00:37:00.721 "data_offset": 0, 00:37:00.721 "data_size": 65536 00:37:00.721 }, 00:37:00.721 { 00:37:00.721 "name": "BaseBdev2", 00:37:00.721 "uuid": "7ab0bec8-4694-5245-b6d6-f7b6e515e8b9", 00:37:00.721 "is_configured": true, 00:37:00.721 "data_offset": 0, 00:37:00.721 "data_size": 65536 00:37:00.721 } 00:37:00.721 ] 00:37:00.721 }' 00:37:00.721 23:18:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:00.721 23:18:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:00.979 23:18:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:37:00.979 23:18:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.979 23:18:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:00.979 [2024-12-09 23:18:41.556809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:00.979 [2024-12-09 23:18:41.574444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:37:00.979 23:18:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.979 23:18:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:37:00.979 [2024-12-09 23:18:41.576554] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:02.353 23:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:02.353 23:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:02.353 23:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:02.353 23:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:02.353 23:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:02.353 23:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:02.353 23:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:02.353 23:18:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.353 23:18:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:02.353 23:18:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.353 23:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:02.353 "name": "raid_bdev1", 00:37:02.353 "uuid": "a40ead31-fda5-4671-87bd-cc0c450d12df", 00:37:02.353 "strip_size_kb": 0, 00:37:02.353 "state": "online", 00:37:02.353 "raid_level": "raid1", 00:37:02.353 "superblock": false, 00:37:02.353 "num_base_bdevs": 2, 00:37:02.353 "num_base_bdevs_discovered": 2, 00:37:02.353 "num_base_bdevs_operational": 2, 00:37:02.353 "process": { 00:37:02.353 "type": "rebuild", 00:37:02.353 "target": "spare", 00:37:02.353 "progress": { 00:37:02.353 
"blocks": 20480, 00:37:02.353 "percent": 31 00:37:02.353 } 00:37:02.353 }, 00:37:02.353 "base_bdevs_list": [ 00:37:02.353 { 00:37:02.353 "name": "spare", 00:37:02.353 "uuid": "e6b49e3b-f34a-560d-8aa5-ddbdddd1bbf3", 00:37:02.353 "is_configured": true, 00:37:02.353 "data_offset": 0, 00:37:02.353 "data_size": 65536 00:37:02.353 }, 00:37:02.353 { 00:37:02.353 "name": "BaseBdev2", 00:37:02.353 "uuid": "7ab0bec8-4694-5245-b6d6-f7b6e515e8b9", 00:37:02.353 "is_configured": true, 00:37:02.353 "data_offset": 0, 00:37:02.353 "data_size": 65536 00:37:02.353 } 00:37:02.353 ] 00:37:02.353 }' 00:37:02.353 23:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:02.353 23:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:02.353 23:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:02.353 23:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:02.353 23:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:37:02.353 23:18:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.353 23:18:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:02.354 [2024-12-09 23:18:42.724088] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:02.354 [2024-12-09 23:18:42.782338] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:02.354 [2024-12-09 23:18:42.782641] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:02.354 [2024-12-09 23:18:42.782743] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:02.354 [2024-12-09 23:18:42.782790] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:37:02.354 23:18:42 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.354 23:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:02.354 23:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:02.354 23:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:02.354 23:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:02.354 23:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:02.354 23:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:37:02.354 23:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:02.354 23:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:02.354 23:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:02.354 23:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:02.354 23:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:02.354 23:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:02.354 23:18:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.354 23:18:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:02.354 23:18:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.354 23:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:02.354 "name": "raid_bdev1", 00:37:02.354 "uuid": "a40ead31-fda5-4671-87bd-cc0c450d12df", 00:37:02.354 "strip_size_kb": 0, 00:37:02.354 "state": "online", 00:37:02.354 "raid_level": "raid1", 00:37:02.354 
"superblock": false, 00:37:02.354 "num_base_bdevs": 2, 00:37:02.354 "num_base_bdevs_discovered": 1, 00:37:02.354 "num_base_bdevs_operational": 1, 00:37:02.354 "base_bdevs_list": [ 00:37:02.354 { 00:37:02.354 "name": null, 00:37:02.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:02.354 "is_configured": false, 00:37:02.354 "data_offset": 0, 00:37:02.354 "data_size": 65536 00:37:02.354 }, 00:37:02.354 { 00:37:02.354 "name": "BaseBdev2", 00:37:02.354 "uuid": "7ab0bec8-4694-5245-b6d6-f7b6e515e8b9", 00:37:02.354 "is_configured": true, 00:37:02.354 "data_offset": 0, 00:37:02.354 "data_size": 65536 00:37:02.354 } 00:37:02.354 ] 00:37:02.354 }' 00:37:02.354 23:18:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:02.354 23:18:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:02.612 23:18:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:02.612 23:18:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:02.612 23:18:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:37:02.612 23:18:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:37:02.612 23:18:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:02.612 23:18:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:02.612 23:18:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:02.612 23:18:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.612 23:18:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:02.870 23:18:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.870 23:18:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:37:02.870 "name": "raid_bdev1", 00:37:02.870 "uuid": "a40ead31-fda5-4671-87bd-cc0c450d12df", 00:37:02.870 "strip_size_kb": 0, 00:37:02.870 "state": "online", 00:37:02.870 "raid_level": "raid1", 00:37:02.870 "superblock": false, 00:37:02.870 "num_base_bdevs": 2, 00:37:02.870 "num_base_bdevs_discovered": 1, 00:37:02.870 "num_base_bdevs_operational": 1, 00:37:02.870 "base_bdevs_list": [ 00:37:02.870 { 00:37:02.870 "name": null, 00:37:02.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:02.870 "is_configured": false, 00:37:02.870 "data_offset": 0, 00:37:02.870 "data_size": 65536 00:37:02.870 }, 00:37:02.870 { 00:37:02.870 "name": "BaseBdev2", 00:37:02.870 "uuid": "7ab0bec8-4694-5245-b6d6-f7b6e515e8b9", 00:37:02.870 "is_configured": true, 00:37:02.870 "data_offset": 0, 00:37:02.870 "data_size": 65536 00:37:02.870 } 00:37:02.870 ] 00:37:02.870 }' 00:37:02.870 23:18:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:02.870 23:18:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:37:02.870 23:18:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:02.870 23:18:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:37:02.870 23:18:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:37:02.870 23:18:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.870 23:18:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:02.870 [2024-12-09 23:18:43.363500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:02.870 [2024-12-09 23:18:43.379901] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:37:02.870 23:18:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.870 
23:18:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:37:02.870 [2024-12-09 23:18:43.382093] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:03.826 23:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:03.826 23:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:03.826 23:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:03.826 23:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:03.826 23:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:03.826 23:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:03.826 23:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:03.826 23:18:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.826 23:18:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:03.826 23:18:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.826 23:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:03.826 "name": "raid_bdev1", 00:37:03.826 "uuid": "a40ead31-fda5-4671-87bd-cc0c450d12df", 00:37:03.826 "strip_size_kb": 0, 00:37:03.826 "state": "online", 00:37:03.826 "raid_level": "raid1", 00:37:03.826 "superblock": false, 00:37:03.826 "num_base_bdevs": 2, 00:37:03.826 "num_base_bdevs_discovered": 2, 00:37:03.826 "num_base_bdevs_operational": 2, 00:37:03.826 "process": { 00:37:03.826 "type": "rebuild", 00:37:03.826 "target": "spare", 00:37:03.826 "progress": { 00:37:03.826 "blocks": 20480, 00:37:03.826 "percent": 31 00:37:03.826 } 00:37:03.826 }, 00:37:03.826 "base_bdevs_list": [ 
00:37:03.826 { 00:37:03.826 "name": "spare", 00:37:03.826 "uuid": "e6b49e3b-f34a-560d-8aa5-ddbdddd1bbf3", 00:37:03.826 "is_configured": true, 00:37:03.826 "data_offset": 0, 00:37:03.826 "data_size": 65536 00:37:03.826 }, 00:37:03.826 { 00:37:03.826 "name": "BaseBdev2", 00:37:03.826 "uuid": "7ab0bec8-4694-5245-b6d6-f7b6e515e8b9", 00:37:03.826 "is_configured": true, 00:37:03.826 "data_offset": 0, 00:37:03.826 "data_size": 65536 00:37:03.826 } 00:37:03.826 ] 00:37:03.826 }' 00:37:03.826 23:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:04.085 23:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:04.085 23:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:04.085 23:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:04.085 23:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:37:04.085 23:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:37:04.085 23:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:37:04.085 23:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:37:04.085 23:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=374 00:37:04.085 23:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:04.085 23:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:04.085 23:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:04.085 23:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:04.085 23:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:04.085 
23:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:04.085 23:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:04.085 23:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:04.085 23:18:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.085 23:18:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:04.085 23:18:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.085 23:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:04.085 "name": "raid_bdev1", 00:37:04.085 "uuid": "a40ead31-fda5-4671-87bd-cc0c450d12df", 00:37:04.085 "strip_size_kb": 0, 00:37:04.085 "state": "online", 00:37:04.085 "raid_level": "raid1", 00:37:04.085 "superblock": false, 00:37:04.085 "num_base_bdevs": 2, 00:37:04.085 "num_base_bdevs_discovered": 2, 00:37:04.085 "num_base_bdevs_operational": 2, 00:37:04.085 "process": { 00:37:04.085 "type": "rebuild", 00:37:04.085 "target": "spare", 00:37:04.085 "progress": { 00:37:04.085 "blocks": 22528, 00:37:04.085 "percent": 34 00:37:04.085 } 00:37:04.085 }, 00:37:04.085 "base_bdevs_list": [ 00:37:04.085 { 00:37:04.085 "name": "spare", 00:37:04.085 "uuid": "e6b49e3b-f34a-560d-8aa5-ddbdddd1bbf3", 00:37:04.085 "is_configured": true, 00:37:04.085 "data_offset": 0, 00:37:04.085 "data_size": 65536 00:37:04.085 }, 00:37:04.085 { 00:37:04.085 "name": "BaseBdev2", 00:37:04.086 "uuid": "7ab0bec8-4694-5245-b6d6-f7b6e515e8b9", 00:37:04.086 "is_configured": true, 00:37:04.086 "data_offset": 0, 00:37:04.086 "data_size": 65536 00:37:04.086 } 00:37:04.086 ] 00:37:04.086 }' 00:37:04.086 23:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:04.086 23:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:37:04.086 23:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:04.086 23:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:04.086 23:18:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:05.022 23:18:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:05.022 23:18:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:05.022 23:18:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:05.022 23:18:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:05.022 23:18:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:05.022 23:18:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:05.022 23:18:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:05.022 23:18:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:05.022 23:18:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.022 23:18:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:05.280 23:18:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.280 23:18:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:05.280 "name": "raid_bdev1", 00:37:05.280 "uuid": "a40ead31-fda5-4671-87bd-cc0c450d12df", 00:37:05.280 "strip_size_kb": 0, 00:37:05.280 "state": "online", 00:37:05.280 "raid_level": "raid1", 00:37:05.280 "superblock": false, 00:37:05.280 "num_base_bdevs": 2, 00:37:05.280 "num_base_bdevs_discovered": 2, 00:37:05.280 "num_base_bdevs_operational": 2, 00:37:05.280 "process": { 
00:37:05.280 "type": "rebuild", 00:37:05.280 "target": "spare", 00:37:05.280 "progress": { 00:37:05.280 "blocks": 45056, 00:37:05.280 "percent": 68 00:37:05.280 } 00:37:05.280 }, 00:37:05.280 "base_bdevs_list": [ 00:37:05.280 { 00:37:05.280 "name": "spare", 00:37:05.280 "uuid": "e6b49e3b-f34a-560d-8aa5-ddbdddd1bbf3", 00:37:05.280 "is_configured": true, 00:37:05.280 "data_offset": 0, 00:37:05.280 "data_size": 65536 00:37:05.280 }, 00:37:05.280 { 00:37:05.280 "name": "BaseBdev2", 00:37:05.280 "uuid": "7ab0bec8-4694-5245-b6d6-f7b6e515e8b9", 00:37:05.280 "is_configured": true, 00:37:05.280 "data_offset": 0, 00:37:05.280 "data_size": 65536 00:37:05.280 } 00:37:05.280 ] 00:37:05.280 }' 00:37:05.280 23:18:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:05.280 23:18:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:05.280 23:18:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:05.280 23:18:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:05.280 23:18:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:06.215 [2024-12-09 23:18:46.597317] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:37:06.215 [2024-12-09 23:18:46.597425] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:37:06.215 [2024-12-09 23:18:46.597473] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:06.215 23:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:06.215 23:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:06.215 23:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:06.215 23:18:46 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:06.215 23:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:06.215 23:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:06.215 23:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:06.215 23:18:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.215 23:18:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:06.215 23:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:06.215 23:18:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:06.474 23:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:06.474 "name": "raid_bdev1", 00:37:06.474 "uuid": "a40ead31-fda5-4671-87bd-cc0c450d12df", 00:37:06.474 "strip_size_kb": 0, 00:37:06.474 "state": "online", 00:37:06.474 "raid_level": "raid1", 00:37:06.474 "superblock": false, 00:37:06.474 "num_base_bdevs": 2, 00:37:06.474 "num_base_bdevs_discovered": 2, 00:37:06.474 "num_base_bdevs_operational": 2, 00:37:06.474 "base_bdevs_list": [ 00:37:06.474 { 00:37:06.474 "name": "spare", 00:37:06.474 "uuid": "e6b49e3b-f34a-560d-8aa5-ddbdddd1bbf3", 00:37:06.474 "is_configured": true, 00:37:06.474 "data_offset": 0, 00:37:06.474 "data_size": 65536 00:37:06.474 }, 00:37:06.474 { 00:37:06.474 "name": "BaseBdev2", 00:37:06.474 "uuid": "7ab0bec8-4694-5245-b6d6-f7b6e515e8b9", 00:37:06.474 "is_configured": true, 00:37:06.474 "data_offset": 0, 00:37:06.474 "data_size": 65536 00:37:06.474 } 00:37:06.474 ] 00:37:06.474 }' 00:37:06.474 23:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:06.474 23:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:37:06.474 23:18:46 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:06.474 23:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:37:06.474 23:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:37:06.474 23:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:06.474 23:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:06.474 23:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:37:06.474 23:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:37:06.474 23:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:06.474 23:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:06.474 23:18:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.474 23:18:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:06.474 23:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:06.474 23:18:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:06.474 23:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:06.474 "name": "raid_bdev1", 00:37:06.474 "uuid": "a40ead31-fda5-4671-87bd-cc0c450d12df", 00:37:06.474 "strip_size_kb": 0, 00:37:06.474 "state": "online", 00:37:06.474 "raid_level": "raid1", 00:37:06.474 "superblock": false, 00:37:06.474 "num_base_bdevs": 2, 00:37:06.474 "num_base_bdevs_discovered": 2, 00:37:06.474 "num_base_bdevs_operational": 2, 00:37:06.474 "base_bdevs_list": [ 00:37:06.474 { 00:37:06.474 "name": "spare", 00:37:06.474 "uuid": "e6b49e3b-f34a-560d-8aa5-ddbdddd1bbf3", 00:37:06.474 "is_configured": true, 
00:37:06.474 "data_offset": 0, 00:37:06.474 "data_size": 65536 00:37:06.474 }, 00:37:06.474 { 00:37:06.474 "name": "BaseBdev2", 00:37:06.474 "uuid": "7ab0bec8-4694-5245-b6d6-f7b6e515e8b9", 00:37:06.474 "is_configured": true, 00:37:06.474 "data_offset": 0, 00:37:06.474 "data_size": 65536 00:37:06.474 } 00:37:06.474 ] 00:37:06.474 }' 00:37:06.474 23:18:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:06.474 23:18:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:37:06.474 23:18:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:06.474 23:18:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:37:06.474 23:18:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:06.474 23:18:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:06.474 23:18:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:06.474 23:18:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:06.474 23:18:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:06.474 23:18:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:37:06.474 23:18:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:06.474 23:18:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:06.474 23:18:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:06.474 23:18:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:06.474 23:18:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:06.475 23:18:47 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:06.475 23:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.475 23:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:06.475 23:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:06.475 23:18:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:06.475 "name": "raid_bdev1", 00:37:06.475 "uuid": "a40ead31-fda5-4671-87bd-cc0c450d12df", 00:37:06.475 "strip_size_kb": 0, 00:37:06.475 "state": "online", 00:37:06.475 "raid_level": "raid1", 00:37:06.475 "superblock": false, 00:37:06.475 "num_base_bdevs": 2, 00:37:06.475 "num_base_bdevs_discovered": 2, 00:37:06.475 "num_base_bdevs_operational": 2, 00:37:06.475 "base_bdevs_list": [ 00:37:06.475 { 00:37:06.475 "name": "spare", 00:37:06.475 "uuid": "e6b49e3b-f34a-560d-8aa5-ddbdddd1bbf3", 00:37:06.475 "is_configured": true, 00:37:06.475 "data_offset": 0, 00:37:06.475 "data_size": 65536 00:37:06.475 }, 00:37:06.475 { 00:37:06.475 "name": "BaseBdev2", 00:37:06.475 "uuid": "7ab0bec8-4694-5245-b6d6-f7b6e515e8b9", 00:37:06.475 "is_configured": true, 00:37:06.475 "data_offset": 0, 00:37:06.475 "data_size": 65536 00:37:06.475 } 00:37:06.475 ] 00:37:06.475 }' 00:37:06.475 23:18:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:06.475 23:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:07.111 23:18:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:37:07.111 23:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.111 23:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:07.111 [2024-12-09 23:18:47.504696] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:07.111 [2024-12-09 
23:18:47.504871] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:07.111 [2024-12-09 23:18:47.504985] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:07.111 [2024-12-09 23:18:47.505058] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:07.111 [2024-12-09 23:18:47.505070] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:37:07.111 23:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.111 23:18:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:07.111 23:18:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:37:07.111 23:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.111 23:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:07.111 23:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.111 23:18:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:37:07.111 23:18:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:37:07.111 23:18:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:37:07.111 23:18:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:37:07.111 23:18:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:37:07.111 23:18:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:37:07.111 23:18:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:07.111 23:18:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:07.111 23:18:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:07.111 23:18:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:37:07.111 23:18:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:07.111 23:18:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:07.111 23:18:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:37:07.371 /dev/nbd0 00:37:07.371 23:18:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:37:07.371 23:18:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:37:07.371 23:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:37:07.371 23:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:37:07.371 23:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:37:07.371 23:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:37:07.371 23:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:37:07.371 23:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:37:07.371 23:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:37:07.371 23:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:37:07.371 23:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:07.371 1+0 records in 00:37:07.371 1+0 records out 00:37:07.371 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00396016 s, 1.0 MB/s 00:37:07.371 23:18:47 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:07.371 23:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:37:07.371 23:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:07.371 23:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:37:07.371 23:18:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:37:07.371 23:18:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:07.371 23:18:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:07.371 23:18:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:37:07.640 /dev/nbd1 00:37:07.640 23:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:37:07.640 23:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:37:07.640 23:18:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:37:07.640 23:18:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:37:07.640 23:18:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:37:07.640 23:18:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:37:07.640 23:18:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:37:07.640 23:18:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:37:07.640 23:18:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:37:07.640 23:18:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:37:07.640 23:18:48 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:07.640 1+0 records in 00:37:07.640 1+0 records out 00:37:07.640 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390742 s, 10.5 MB/s 00:37:07.640 23:18:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:07.640 23:18:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:37:07.640 23:18:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:07.640 23:18:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:37:07.640 23:18:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:37:07.640 23:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:07.640 23:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:07.640 23:18:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:37:07.640 23:18:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:37:07.640 23:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:37:07.640 23:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:07.640 23:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:07.640 23:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:37:07.640 23:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:07.641 23:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:37:07.906 23:18:48 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:07.906 23:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:07.906 23:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:07.906 23:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:07.906 23:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:07.906 23:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:07.906 23:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:37:07.906 23:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:37:07.906 23:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:07.906 23:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:37:08.165 23:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:37:08.165 23:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:37:08.166 23:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:37:08.166 23:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:08.166 23:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:08.166 23:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:37:08.166 23:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:37:08.166 23:18:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:37:08.166 23:18:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:37:08.166 23:18:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 
75169 00:37:08.166 23:18:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75169 ']' 00:37:08.166 23:18:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75169 00:37:08.166 23:18:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:37:08.425 23:18:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:08.425 23:18:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75169 00:37:08.425 killing process with pid 75169 00:37:08.425 Received shutdown signal, test time was about 60.000000 seconds 00:37:08.425 00:37:08.425 Latency(us) 00:37:08.425 [2024-12-09T23:18:49.061Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:08.425 [2024-12-09T23:18:49.061Z] =================================================================================================================== 00:37:08.425 [2024-12-09T23:18:49.061Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:08.425 23:18:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:08.425 23:18:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:08.425 23:18:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75169' 00:37:08.425 23:18:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75169 00:37:08.425 [2024-12-09 23:18:48.841423] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:08.425 23:18:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75169 00:37:08.684 [2024-12-09 23:18:49.115421] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:09.624 23:18:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:37:09.624 00:37:09.624 real 0m16.588s 00:37:09.625 user 0m18.035s 00:37:09.625 sys 
0m3.682s 00:37:09.625 ************************************ 00:37:09.625 END TEST raid_rebuild_test 00:37:09.625 ************************************ 00:37:09.625 23:18:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:09.625 23:18:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:09.886 23:18:50 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:37:09.886 23:18:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:37:09.886 23:18:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:09.886 23:18:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:09.886 ************************************ 00:37:09.886 START TEST raid_rebuild_test_sb 00:37:09.886 ************************************ 00:37:09.886 23:18:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:37:09.886 23:18:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:37:09.886 23:18:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:37:09.886 23:18:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:37:09.886 23:18:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:37:09.886 23:18:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:37:09.886 23:18:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:37:09.886 23:18:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:37:09.886 23:18:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:37:09.886 23:18:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:37:09.886 23:18:50 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:37:09.886 23:18:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:37:09.886 23:18:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:37:09.886 23:18:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:37:09.886 23:18:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:37:09.886 23:18:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:37:09.886 23:18:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:37:09.886 23:18:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:37:09.886 23:18:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:37:09.886 23:18:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:37:09.886 23:18:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:37:09.886 23:18:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:37:09.886 23:18:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:37:09.886 23:18:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:37:09.886 23:18:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:37:09.886 23:18:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75600 00:37:09.886 23:18:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:37:09.886 23:18:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75600 00:37:09.886 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:37:09.886 23:18:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75600 ']' 00:37:09.886 23:18:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:09.886 23:18:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:09.886 23:18:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:09.886 23:18:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:09.886 23:18:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:09.886 [2024-12-09 23:18:50.448966] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:37:09.886 [2024-12-09 23:18:50.449267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:37:09.886 Zero copy mechanism will not be used. 
00:37:09.886 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75600 ] 00:37:10.144 [2024-12-09 23:18:50.625059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:10.144 [2024-12-09 23:18:50.759382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:10.403 [2024-12-09 23:18:50.972262] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:10.403 [2024-12-09 23:18:50.972325] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:10.971 23:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:10.971 23:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:37:10.971 23:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:37:10.971 23:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:37:10.971 23:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.971 23:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:10.971 BaseBdev1_malloc 00:37:10.971 23:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.971 23:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:37:10.971 23:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.971 23:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:10.971 [2024-12-09 23:18:51.346840] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:37:10.971 [2024-12-09 23:18:51.347207] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:37:10.971 [2024-12-09 23:18:51.347251] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:37:10.971 [2024-12-09 23:18:51.347267] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:10.971 [2024-12-09 23:18:51.349873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:10.971 [2024-12-09 23:18:51.350094] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:37:10.971 BaseBdev1 00:37:10.971 23:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:10.972 BaseBdev2_malloc 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:10.972 [2024-12-09 23:18:51.405330] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:37:10.972 [2024-12-09 23:18:51.405588] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:10.972 [2024-12-09 23:18:51.405663] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:37:10.972 [2024-12-09 23:18:51.405730] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:10.972 [2024-12-09 23:18:51.408341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:10.972 [2024-12-09 23:18:51.408613] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:37:10.972 BaseBdev2 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:10.972 spare_malloc 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:10.972 spare_delay 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:10.972 [2024-12-09 23:18:51.482390] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:10.972 [2024-12-09 23:18:51.482982] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:37:10.972 [2024-12-09 23:18:51.483118] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:37:10.972 [2024-12-09 23:18:51.483142] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:10.972 [2024-12-09 23:18:51.485828] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:10.972 [2024-12-09 23:18:51.485869] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:10.972 spare 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:10.972 [2024-12-09 23:18:51.490557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:10.972 [2024-12-09 23:18:51.492616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:10.972 [2024-12-09 23:18:51.492789] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:37:10.972 [2024-12-09 23:18:51.492806] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:37:10.972 [2024-12-09 23:18:51.493059] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:37:10.972 [2024-12-09 23:18:51.493222] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:37:10.972 [2024-12-09 23:18:51.493232] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:37:10.972 [2024-12-09 23:18:51.493510] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:10.972 "name": "raid_bdev1", 00:37:10.972 "uuid": "ea95c463-52af-4fda-9c48-3ed688cc2816", 00:37:10.972 
"strip_size_kb": 0, 00:37:10.972 "state": "online", 00:37:10.972 "raid_level": "raid1", 00:37:10.972 "superblock": true, 00:37:10.972 "num_base_bdevs": 2, 00:37:10.972 "num_base_bdevs_discovered": 2, 00:37:10.972 "num_base_bdevs_operational": 2, 00:37:10.972 "base_bdevs_list": [ 00:37:10.972 { 00:37:10.972 "name": "BaseBdev1", 00:37:10.972 "uuid": "10ccf0e4-6c0a-528b-9023-cd6c22c71fb7", 00:37:10.972 "is_configured": true, 00:37:10.972 "data_offset": 2048, 00:37:10.972 "data_size": 63488 00:37:10.972 }, 00:37:10.972 { 00:37:10.972 "name": "BaseBdev2", 00:37:10.972 "uuid": "f5f87794-48c5-5fa5-ab1b-2613d4a4d3cf", 00:37:10.972 "is_configured": true, 00:37:10.972 "data_offset": 2048, 00:37:10.972 "data_size": 63488 00:37:10.972 } 00:37:10.972 ] 00:37:10.972 }' 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:10.972 23:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:11.540 23:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:37:11.540 23:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.540 23:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:11.540 23:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:37:11.540 [2024-12-09 23:18:51.910746] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:11.540 23:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.540 23:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:37:11.540 23:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:11.540 23:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.540 23:18:51 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:37:11.540 23:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:37:11.540 23:18:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.540 23:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:37:11.540 23:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:37:11.540 23:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:37:11.540 23:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:37:11.540 23:18:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:37:11.540 23:18:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:37:11.540 23:18:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:37:11.540 23:18:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:11.540 23:18:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:37:11.540 23:18:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:11.540 23:18:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:37:11.540 23:18:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:11.540 23:18:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:11.540 23:18:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:37:11.798 [2024-12-09 23:18:52.182531] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:37:11.798 /dev/nbd0 00:37:11.798 23:18:52 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:37:11.798 23:18:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:37:11.798 23:18:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:37:11.798 23:18:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:37:11.798 23:18:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:37:11.798 23:18:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:37:11.798 23:18:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:37:11.798 23:18:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:37:11.798 23:18:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:37:11.798 23:18:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:37:11.798 23:18:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:11.798 1+0 records in 00:37:11.798 1+0 records out 00:37:11.798 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000978071 s, 4.2 MB/s 00:37:11.798 23:18:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:11.798 23:18:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:37:11.798 23:18:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:11.798 23:18:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:37:11.798 23:18:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:37:11.798 23:18:52 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:11.798 23:18:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:11.798 23:18:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:37:11.798 23:18:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:37:11.798 23:18:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:37:15.992 63488+0 records in 00:37:15.992 63488+0 records out 00:37:15.992 32505856 bytes (33 MB, 31 MiB) copied, 4.01055 s, 8.1 MB/s 00:37:15.992 23:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:37:15.992 23:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:37:15.992 23:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:37:15.992 23:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:15.992 23:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:37:15.992 23:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:15.992 23:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:37:15.992 23:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:15.992 [2024-12-09 23:18:56.501694] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:15.992 23:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:15.992 23:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:15.992 23:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:37:15.992 23:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:15.992 23:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:15.992 23:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:37:15.992 23:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:37:15.992 23:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:37:15.992 23:18:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.992 23:18:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:15.992 [2024-12-09 23:18:56.521791] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:15.992 23:18:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.992 23:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:15.992 23:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:15.992 23:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:15.992 23:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:15.992 23:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:15.992 23:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:37:15.992 23:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:15.992 23:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:15.992 23:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:15.992 23:18:56 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:37:15.992 23:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:15.992 23:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:15.992 23:18:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.992 23:18:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:15.992 23:18:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.992 23:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:15.992 "name": "raid_bdev1", 00:37:15.992 "uuid": "ea95c463-52af-4fda-9c48-3ed688cc2816", 00:37:15.992 "strip_size_kb": 0, 00:37:15.992 "state": "online", 00:37:15.992 "raid_level": "raid1", 00:37:15.992 "superblock": true, 00:37:15.992 "num_base_bdevs": 2, 00:37:15.992 "num_base_bdevs_discovered": 1, 00:37:15.992 "num_base_bdevs_operational": 1, 00:37:15.992 "base_bdevs_list": [ 00:37:15.992 { 00:37:15.992 "name": null, 00:37:15.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:15.992 "is_configured": false, 00:37:15.992 "data_offset": 0, 00:37:15.992 "data_size": 63488 00:37:15.992 }, 00:37:15.992 { 00:37:15.992 "name": "BaseBdev2", 00:37:15.992 "uuid": "f5f87794-48c5-5fa5-ab1b-2613d4a4d3cf", 00:37:15.992 "is_configured": true, 00:37:15.992 "data_offset": 2048, 00:37:15.992 "data_size": 63488 00:37:15.992 } 00:37:15.992 ] 00:37:15.992 }' 00:37:15.992 23:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:15.992 23:18:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:16.559 23:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:37:16.559 23:18:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:37:16.559 23:18:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:16.559 [2024-12-09 23:18:56.937219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:16.559 [2024-12-09 23:18:56.955026] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:37:16.559 23:18:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:16.559 23:18:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:37:16.559 [2024-12-09 23:18:56.957348] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:17.495 23:18:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:17.495 23:18:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:17.495 23:18:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:17.495 23:18:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:17.495 23:18:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:17.495 23:18:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:17.495 23:18:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:17.495 23:18:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.495 23:18:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:17.495 23:18:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.495 23:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:17.495 "name": "raid_bdev1", 00:37:17.495 "uuid": "ea95c463-52af-4fda-9c48-3ed688cc2816", 
00:37:17.495 "strip_size_kb": 0, 00:37:17.495 "state": "online", 00:37:17.495 "raid_level": "raid1", 00:37:17.495 "superblock": true, 00:37:17.495 "num_base_bdevs": 2, 00:37:17.495 "num_base_bdevs_discovered": 2, 00:37:17.495 "num_base_bdevs_operational": 2, 00:37:17.495 "process": { 00:37:17.495 "type": "rebuild", 00:37:17.495 "target": "spare", 00:37:17.495 "progress": { 00:37:17.495 "blocks": 20480, 00:37:17.495 "percent": 32 00:37:17.495 } 00:37:17.495 }, 00:37:17.495 "base_bdevs_list": [ 00:37:17.495 { 00:37:17.495 "name": "spare", 00:37:17.495 "uuid": "36b17363-88c5-5f01-8e2d-ca009af1d24f", 00:37:17.495 "is_configured": true, 00:37:17.495 "data_offset": 2048, 00:37:17.495 "data_size": 63488 00:37:17.495 }, 00:37:17.495 { 00:37:17.495 "name": "BaseBdev2", 00:37:17.495 "uuid": "f5f87794-48c5-5fa5-ab1b-2613d4a4d3cf", 00:37:17.495 "is_configured": true, 00:37:17.495 "data_offset": 2048, 00:37:17.495 "data_size": 63488 00:37:17.495 } 00:37:17.495 ] 00:37:17.495 }' 00:37:17.495 23:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:17.495 23:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:17.495 23:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:17.495 23:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:17.495 23:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:37:17.495 23:18:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.495 23:18:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:17.495 [2024-12-09 23:18:58.096882] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:17.754 [2024-12-09 23:18:58.163325] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev 
raid_bdev1: No such device 00:37:17.754 [2024-12-09 23:18:58.163478] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:17.754 [2024-12-09 23:18:58.163506] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:17.754 [2024-12-09 23:18:58.163526] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:37:17.754 23:18:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.754 23:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:17.754 23:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:17.754 23:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:17.754 23:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:17.754 23:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:17.754 23:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:37:17.754 23:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:17.754 23:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:17.754 23:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:17.754 23:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:17.754 23:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:17.754 23:18:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.754 23:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:17.754 23:18:58 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:17.754 23:18:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.755 23:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:17.755 "name": "raid_bdev1", 00:37:17.755 "uuid": "ea95c463-52af-4fda-9c48-3ed688cc2816", 00:37:17.755 "strip_size_kb": 0, 00:37:17.755 "state": "online", 00:37:17.755 "raid_level": "raid1", 00:37:17.755 "superblock": true, 00:37:17.755 "num_base_bdevs": 2, 00:37:17.755 "num_base_bdevs_discovered": 1, 00:37:17.755 "num_base_bdevs_operational": 1, 00:37:17.755 "base_bdevs_list": [ 00:37:17.755 { 00:37:17.755 "name": null, 00:37:17.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:17.755 "is_configured": false, 00:37:17.755 "data_offset": 0, 00:37:17.755 "data_size": 63488 00:37:17.755 }, 00:37:17.755 { 00:37:17.755 "name": "BaseBdev2", 00:37:17.755 "uuid": "f5f87794-48c5-5fa5-ab1b-2613d4a4d3cf", 00:37:17.755 "is_configured": true, 00:37:17.755 "data_offset": 2048, 00:37:17.755 "data_size": 63488 00:37:17.755 } 00:37:17.755 ] 00:37:17.755 }' 00:37:17.755 23:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:17.755 23:18:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:18.013 23:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:18.013 23:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:18.013 23:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:37:18.013 23:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:37:18.013 23:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:18.013 23:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:37:18.013 23:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:18.013 23:18:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.013 23:18:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:18.272 23:18:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.272 23:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:18.272 "name": "raid_bdev1", 00:37:18.272 "uuid": "ea95c463-52af-4fda-9c48-3ed688cc2816", 00:37:18.272 "strip_size_kb": 0, 00:37:18.272 "state": "online", 00:37:18.272 "raid_level": "raid1", 00:37:18.272 "superblock": true, 00:37:18.272 "num_base_bdevs": 2, 00:37:18.272 "num_base_bdevs_discovered": 1, 00:37:18.272 "num_base_bdevs_operational": 1, 00:37:18.272 "base_bdevs_list": [ 00:37:18.272 { 00:37:18.272 "name": null, 00:37:18.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:18.272 "is_configured": false, 00:37:18.272 "data_offset": 0, 00:37:18.272 "data_size": 63488 00:37:18.272 }, 00:37:18.272 { 00:37:18.272 "name": "BaseBdev2", 00:37:18.272 "uuid": "f5f87794-48c5-5fa5-ab1b-2613d4a4d3cf", 00:37:18.272 "is_configured": true, 00:37:18.272 "data_offset": 2048, 00:37:18.272 "data_size": 63488 00:37:18.272 } 00:37:18.272 ] 00:37:18.272 }' 00:37:18.272 23:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:18.272 23:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:37:18.272 23:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:18.272 23:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:37:18.272 23:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 
spare 00:37:18.272 23:18:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.272 23:18:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:18.272 [2024-12-09 23:18:58.771710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:18.272 [2024-12-09 23:18:58.787580] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:37:18.272 23:18:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.272 [2024-12-09 23:18:58.789682] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:18.272 23:18:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:37:19.208 23:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:19.208 23:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:19.208 23:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:19.208 23:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:19.208 23:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:19.208 23:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:19.208 23:18:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.208 23:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:19.208 23:18:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:19.208 23:18:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:19.467 23:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:37:19.467 "name": "raid_bdev1", 00:37:19.467 "uuid": "ea95c463-52af-4fda-9c48-3ed688cc2816", 00:37:19.467 "strip_size_kb": 0, 00:37:19.467 "state": "online", 00:37:19.467 "raid_level": "raid1", 00:37:19.467 "superblock": true, 00:37:19.467 "num_base_bdevs": 2, 00:37:19.467 "num_base_bdevs_discovered": 2, 00:37:19.467 "num_base_bdevs_operational": 2, 00:37:19.467 "process": { 00:37:19.467 "type": "rebuild", 00:37:19.467 "target": "spare", 00:37:19.467 "progress": { 00:37:19.467 "blocks": 20480, 00:37:19.467 "percent": 32 00:37:19.467 } 00:37:19.467 }, 00:37:19.467 "base_bdevs_list": [ 00:37:19.467 { 00:37:19.467 "name": "spare", 00:37:19.467 "uuid": "36b17363-88c5-5f01-8e2d-ca009af1d24f", 00:37:19.467 "is_configured": true, 00:37:19.467 "data_offset": 2048, 00:37:19.467 "data_size": 63488 00:37:19.467 }, 00:37:19.467 { 00:37:19.467 "name": "BaseBdev2", 00:37:19.467 "uuid": "f5f87794-48c5-5fa5-ab1b-2613d4a4d3cf", 00:37:19.467 "is_configured": true, 00:37:19.467 "data_offset": 2048, 00:37:19.467 "data_size": 63488 00:37:19.467 } 00:37:19.467 ] 00:37:19.467 }' 00:37:19.467 23:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:19.467 23:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:19.467 23:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:19.467 23:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:19.467 23:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:37:19.467 23:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:37:19.467 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:37:19.467 23:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:37:19.467 23:18:59 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:37:19.467 23:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:37:19.467 23:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=389 00:37:19.467 23:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:19.467 23:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:19.467 23:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:19.467 23:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:19.467 23:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:19.467 23:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:19.467 23:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:19.467 23:18:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.467 23:18:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:19.467 23:18:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:19.467 23:18:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:19.467 23:19:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:19.467 "name": "raid_bdev1", 00:37:19.467 "uuid": "ea95c463-52af-4fda-9c48-3ed688cc2816", 00:37:19.467 "strip_size_kb": 0, 00:37:19.467 "state": "online", 00:37:19.467 "raid_level": "raid1", 00:37:19.467 "superblock": true, 00:37:19.467 "num_base_bdevs": 2, 00:37:19.467 "num_base_bdevs_discovered": 2, 00:37:19.467 "num_base_bdevs_operational": 2, 00:37:19.467 "process": { 00:37:19.467 
"type": "rebuild", 00:37:19.467 "target": "spare", 00:37:19.467 "progress": { 00:37:19.467 "blocks": 22528, 00:37:19.467 "percent": 35 00:37:19.467 } 00:37:19.467 }, 00:37:19.467 "base_bdevs_list": [ 00:37:19.467 { 00:37:19.467 "name": "spare", 00:37:19.467 "uuid": "36b17363-88c5-5f01-8e2d-ca009af1d24f", 00:37:19.467 "is_configured": true, 00:37:19.467 "data_offset": 2048, 00:37:19.467 "data_size": 63488 00:37:19.467 }, 00:37:19.467 { 00:37:19.467 "name": "BaseBdev2", 00:37:19.467 "uuid": "f5f87794-48c5-5fa5-ab1b-2613d4a4d3cf", 00:37:19.467 "is_configured": true, 00:37:19.467 "data_offset": 2048, 00:37:19.467 "data_size": 63488 00:37:19.467 } 00:37:19.467 ] 00:37:19.467 }' 00:37:19.467 23:19:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:19.467 23:19:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:19.467 23:19:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:19.467 23:19:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:19.467 23:19:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:20.844 23:19:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:20.844 23:19:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:20.844 23:19:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:20.844 23:19:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:20.844 23:19:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:20.844 23:19:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:20.844 23:19:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:37:20.844 23:19:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:20.844 23:19:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.844 23:19:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:20.844 23:19:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.844 23:19:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:20.844 "name": "raid_bdev1", 00:37:20.844 "uuid": "ea95c463-52af-4fda-9c48-3ed688cc2816", 00:37:20.844 "strip_size_kb": 0, 00:37:20.844 "state": "online", 00:37:20.844 "raid_level": "raid1", 00:37:20.844 "superblock": true, 00:37:20.844 "num_base_bdevs": 2, 00:37:20.844 "num_base_bdevs_discovered": 2, 00:37:20.844 "num_base_bdevs_operational": 2, 00:37:20.844 "process": { 00:37:20.844 "type": "rebuild", 00:37:20.844 "target": "spare", 00:37:20.844 "progress": { 00:37:20.844 "blocks": 47104, 00:37:20.844 "percent": 74 00:37:20.844 } 00:37:20.844 }, 00:37:20.844 "base_bdevs_list": [ 00:37:20.844 { 00:37:20.844 "name": "spare", 00:37:20.844 "uuid": "36b17363-88c5-5f01-8e2d-ca009af1d24f", 00:37:20.844 "is_configured": true, 00:37:20.844 "data_offset": 2048, 00:37:20.844 "data_size": 63488 00:37:20.844 }, 00:37:20.844 { 00:37:20.844 "name": "BaseBdev2", 00:37:20.844 "uuid": "f5f87794-48c5-5fa5-ab1b-2613d4a4d3cf", 00:37:20.844 "is_configured": true, 00:37:20.844 "data_offset": 2048, 00:37:20.844 "data_size": 63488 00:37:20.844 } 00:37:20.844 ] 00:37:20.844 }' 00:37:20.844 23:19:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:20.844 23:19:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:20.844 23:19:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:20.844 23:19:01 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:20.844 23:19:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:21.411 [2024-12-09 23:19:01.904518] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:37:21.411 [2024-12-09 23:19:01.904612] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:37:21.411 [2024-12-09 23:19:01.904752] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:21.671 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:21.671 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:21.671 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:21.671 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:21.671 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:21.671 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:21.671 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:21.671 23:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.671 23:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:21.671 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:21.671 23:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.929 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:21.929 "name": "raid_bdev1", 00:37:21.929 "uuid": "ea95c463-52af-4fda-9c48-3ed688cc2816", 00:37:21.929 
"strip_size_kb": 0, 00:37:21.929 "state": "online", 00:37:21.929 "raid_level": "raid1", 00:37:21.929 "superblock": true, 00:37:21.929 "num_base_bdevs": 2, 00:37:21.929 "num_base_bdevs_discovered": 2, 00:37:21.929 "num_base_bdevs_operational": 2, 00:37:21.929 "base_bdevs_list": [ 00:37:21.929 { 00:37:21.929 "name": "spare", 00:37:21.929 "uuid": "36b17363-88c5-5f01-8e2d-ca009af1d24f", 00:37:21.929 "is_configured": true, 00:37:21.929 "data_offset": 2048, 00:37:21.929 "data_size": 63488 00:37:21.929 }, 00:37:21.929 { 00:37:21.929 "name": "BaseBdev2", 00:37:21.929 "uuid": "f5f87794-48c5-5fa5-ab1b-2613d4a4d3cf", 00:37:21.929 "is_configured": true, 00:37:21.929 "data_offset": 2048, 00:37:21.929 "data_size": 63488 00:37:21.929 } 00:37:21.929 ] 00:37:21.929 }' 00:37:21.929 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:21.929 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:37:21.929 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:21.929 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:37:21.929 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:37:21.929 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:21.929 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:21.929 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:37:21.929 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:37:21.929 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:21.929 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:21.929 23:19:02 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.930 23:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:21.930 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:21.930 23:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.930 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:21.930 "name": "raid_bdev1", 00:37:21.930 "uuid": "ea95c463-52af-4fda-9c48-3ed688cc2816", 00:37:21.930 "strip_size_kb": 0, 00:37:21.930 "state": "online", 00:37:21.930 "raid_level": "raid1", 00:37:21.930 "superblock": true, 00:37:21.930 "num_base_bdevs": 2, 00:37:21.930 "num_base_bdevs_discovered": 2, 00:37:21.930 "num_base_bdevs_operational": 2, 00:37:21.930 "base_bdevs_list": [ 00:37:21.930 { 00:37:21.930 "name": "spare", 00:37:21.930 "uuid": "36b17363-88c5-5f01-8e2d-ca009af1d24f", 00:37:21.930 "is_configured": true, 00:37:21.930 "data_offset": 2048, 00:37:21.930 "data_size": 63488 00:37:21.930 }, 00:37:21.930 { 00:37:21.930 "name": "BaseBdev2", 00:37:21.930 "uuid": "f5f87794-48c5-5fa5-ab1b-2613d4a4d3cf", 00:37:21.930 "is_configured": true, 00:37:21.930 "data_offset": 2048, 00:37:21.930 "data_size": 63488 00:37:21.930 } 00:37:21.930 ] 00:37:21.930 }' 00:37:21.930 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:21.930 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:37:21.930 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:21.930 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:37:21.930 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:21.930 23:19:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:21.930 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:21.930 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:21.930 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:21.930 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:37:21.930 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:21.930 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:21.930 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:21.930 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:21.930 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:21.930 23:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.930 23:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:21.930 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:22.188 23:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.188 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:22.188 "name": "raid_bdev1", 00:37:22.188 "uuid": "ea95c463-52af-4fda-9c48-3ed688cc2816", 00:37:22.188 "strip_size_kb": 0, 00:37:22.188 "state": "online", 00:37:22.188 "raid_level": "raid1", 00:37:22.188 "superblock": true, 00:37:22.188 "num_base_bdevs": 2, 00:37:22.188 "num_base_bdevs_discovered": 2, 00:37:22.188 "num_base_bdevs_operational": 2, 00:37:22.188 "base_bdevs_list": [ 00:37:22.188 { 
00:37:22.188 "name": "spare", 00:37:22.188 "uuid": "36b17363-88c5-5f01-8e2d-ca009af1d24f", 00:37:22.188 "is_configured": true, 00:37:22.188 "data_offset": 2048, 00:37:22.188 "data_size": 63488 00:37:22.188 }, 00:37:22.188 { 00:37:22.188 "name": "BaseBdev2", 00:37:22.188 "uuid": "f5f87794-48c5-5fa5-ab1b-2613d4a4d3cf", 00:37:22.188 "is_configured": true, 00:37:22.188 "data_offset": 2048, 00:37:22.188 "data_size": 63488 00:37:22.188 } 00:37:22.188 ] 00:37:22.188 }' 00:37:22.188 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:22.188 23:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:22.447 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:37:22.447 23:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.447 23:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:22.447 [2024-12-09 23:19:02.983981] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:22.447 [2024-12-09 23:19:02.984021] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:22.447 [2024-12-09 23:19:02.984109] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:22.447 [2024-12-09 23:19:02.984180] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:22.447 [2024-12-09 23:19:02.984195] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:37:22.447 23:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.447 23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:22.447 23:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.447 
23:19:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:37:22.447 23:19:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:22.447 23:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.447 23:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:37:22.447 23:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:37:22.447 23:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:37:22.447 23:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:37:22.447 23:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:37:22.447 23:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:37:22.447 23:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:22.447 23:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:22.447 23:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:22.447 23:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:37:22.447 23:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:22.447 23:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:22.447 23:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:37:22.706 /dev/nbd0 00:37:22.706 23:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:37:22.706 23:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:37:22.706 
23:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:37:22.706 23:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:37:22.706 23:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:37:22.706 23:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:37:22.706 23:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:37:22.706 23:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:37:22.706 23:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:37:22.706 23:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:37:22.706 23:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:22.706 1+0 records in 00:37:22.706 1+0 records out 00:37:22.706 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040152 s, 10.2 MB/s 00:37:22.706 23:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:22.706 23:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:37:22.706 23:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:22.706 23:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:37:22.706 23:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:37:22.706 23:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:22.706 23:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:22.706 23:19:03 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:37:22.964 /dev/nbd1 00:37:22.964 23:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:37:22.964 23:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:37:22.964 23:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:37:22.964 23:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:37:22.964 23:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:37:22.964 23:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:37:22.964 23:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:37:22.964 23:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:37:22.964 23:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:37:22.964 23:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:37:22.964 23:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:22.964 1+0 records in 00:37:22.964 1+0 records out 00:37:22.964 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00032124 s, 12.8 MB/s 00:37:22.964 23:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:22.964 23:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:37:22.964 23:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:22.964 23:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # 
'[' 4096 '!=' 0 ']' 00:37:22.964 23:19:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:37:22.964 23:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:22.964 23:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:22.964 23:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:37:23.222 23:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:37:23.222 23:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:37:23.222 23:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:23.222 23:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:23.222 23:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:37:23.222 23:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:23.222 23:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:37:23.481 23:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:23.481 23:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:23.481 23:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:23.481 23:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:23.481 23:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:23.481 23:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:23.481 23:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:37:23.481 
23:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:37:23.481 23:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:23.481 23:19:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:37:23.740 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:37:23.740 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:37:23.740 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:37:23.740 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:23.741 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:23.741 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:37:23.741 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:37:23.741 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:37:23.741 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:37:23.741 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:37:23.741 23:19:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.741 23:19:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:23.741 23:19:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.741 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:37:23.741 23:19:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.741 23:19:04 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:37:23.741 [2024-12-09 23:19:04.244309] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:23.741 [2024-12-09 23:19:04.244380] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:23.741 [2024-12-09 23:19:04.244419] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:37:23.741 [2024-12-09 23:19:04.244432] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:23.741 [2024-12-09 23:19:04.247438] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:23.741 [2024-12-09 23:19:04.247495] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:23.741 [2024-12-09 23:19:04.247654] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:37:23.741 [2024-12-09 23:19:04.247738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:23.741 [2024-12-09 23:19:04.247913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:23.741 spare 00:37:23.741 23:19:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.741 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:37:23.741 23:19:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.741 23:19:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:23.741 [2024-12-09 23:19:04.347863] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:37:23.741 [2024-12-09 23:19:04.347932] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:37:23.741 [2024-12-09 23:19:04.348301] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:37:23.741 [2024-12-09 
23:19:04.348536] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:37:23.741 [2024-12-09 23:19:04.348550] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:37:23.741 [2024-12-09 23:19:04.348765] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:23.741 23:19:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.741 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:23.741 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:23.741 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:23.741 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:23.741 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:23.741 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:37:23.741 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:23.741 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:23.741 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:23.741 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:23.741 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:23.741 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:23.741 23:19:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.741 23:19:04 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:37:24.004 23:19:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.004 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:24.004 "name": "raid_bdev1", 00:37:24.004 "uuid": "ea95c463-52af-4fda-9c48-3ed688cc2816", 00:37:24.004 "strip_size_kb": 0, 00:37:24.004 "state": "online", 00:37:24.004 "raid_level": "raid1", 00:37:24.004 "superblock": true, 00:37:24.004 "num_base_bdevs": 2, 00:37:24.004 "num_base_bdevs_discovered": 2, 00:37:24.004 "num_base_bdevs_operational": 2, 00:37:24.004 "base_bdevs_list": [ 00:37:24.004 { 00:37:24.004 "name": "spare", 00:37:24.004 "uuid": "36b17363-88c5-5f01-8e2d-ca009af1d24f", 00:37:24.004 "is_configured": true, 00:37:24.004 "data_offset": 2048, 00:37:24.004 "data_size": 63488 00:37:24.004 }, 00:37:24.004 { 00:37:24.004 "name": "BaseBdev2", 00:37:24.004 "uuid": "f5f87794-48c5-5fa5-ab1b-2613d4a4d3cf", 00:37:24.004 "is_configured": true, 00:37:24.004 "data_offset": 2048, 00:37:24.004 "data_size": 63488 00:37:24.004 } 00:37:24.004 ] 00:37:24.004 }' 00:37:24.004 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:24.004 23:19:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:24.263 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:24.263 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:24.263 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:37:24.263 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:37:24.263 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:24.263 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:24.263 
23:19:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.263 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:24.263 23:19:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:24.263 23:19:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.263 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:24.263 "name": "raid_bdev1", 00:37:24.263 "uuid": "ea95c463-52af-4fda-9c48-3ed688cc2816", 00:37:24.263 "strip_size_kb": 0, 00:37:24.263 "state": "online", 00:37:24.263 "raid_level": "raid1", 00:37:24.263 "superblock": true, 00:37:24.263 "num_base_bdevs": 2, 00:37:24.263 "num_base_bdevs_discovered": 2, 00:37:24.263 "num_base_bdevs_operational": 2, 00:37:24.263 "base_bdevs_list": [ 00:37:24.263 { 00:37:24.263 "name": "spare", 00:37:24.263 "uuid": "36b17363-88c5-5f01-8e2d-ca009af1d24f", 00:37:24.263 "is_configured": true, 00:37:24.263 "data_offset": 2048, 00:37:24.263 "data_size": 63488 00:37:24.263 }, 00:37:24.263 { 00:37:24.263 "name": "BaseBdev2", 00:37:24.263 "uuid": "f5f87794-48c5-5fa5-ab1b-2613d4a4d3cf", 00:37:24.263 "is_configured": true, 00:37:24.263 "data_offset": 2048, 00:37:24.263 "data_size": 63488 00:37:24.263 } 00:37:24.263 ] 00:37:24.263 }' 00:37:24.263 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:24.523 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:37:24.523 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:24.523 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:37:24.523 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:37:24.523 23:19:04 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:24.523 23:19:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.523 23:19:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:24.523 23:19:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.523 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:37:24.523 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:37:24.523 23:19:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.523 23:19:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:24.523 [2024-12-09 23:19:04.991863] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:24.523 23:19:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.523 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:24.523 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:24.523 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:24.523 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:24.523 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:24.523 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:37:24.523 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:24.523 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:24.523 23:19:04 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:24.523 23:19:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:24.523 23:19:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:24.523 23:19:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:24.523 23:19:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.523 23:19:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:24.523 23:19:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.523 23:19:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:24.523 "name": "raid_bdev1", 00:37:24.523 "uuid": "ea95c463-52af-4fda-9c48-3ed688cc2816", 00:37:24.523 "strip_size_kb": 0, 00:37:24.523 "state": "online", 00:37:24.523 "raid_level": "raid1", 00:37:24.523 "superblock": true, 00:37:24.523 "num_base_bdevs": 2, 00:37:24.523 "num_base_bdevs_discovered": 1, 00:37:24.523 "num_base_bdevs_operational": 1, 00:37:24.523 "base_bdevs_list": [ 00:37:24.523 { 00:37:24.523 "name": null, 00:37:24.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:24.523 "is_configured": false, 00:37:24.523 "data_offset": 0, 00:37:24.523 "data_size": 63488 00:37:24.523 }, 00:37:24.523 { 00:37:24.523 "name": "BaseBdev2", 00:37:24.523 "uuid": "f5f87794-48c5-5fa5-ab1b-2613d4a4d3cf", 00:37:24.523 "is_configured": true, 00:37:24.523 "data_offset": 2048, 00:37:24.523 "data_size": 63488 00:37:24.523 } 00:37:24.523 ] 00:37:24.523 }' 00:37:24.523 23:19:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:24.523 23:19:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:25.090 23:19:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
00:37:25.090 23:19:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:25.090 23:19:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:25.090 [2024-12-09 23:19:05.447255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:25.090 [2024-12-09 23:19:05.447484] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:37:25.090 [2024-12-09 23:19:05.447506] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:37:25.090 [2024-12-09 23:19:05.447550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:25.090 [2024-12-09 23:19:05.464017] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:37:25.090 23:19:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:25.090 [2024-12-09 23:19:05.466164] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:25.090 23:19:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:37:26.025 23:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:26.025 23:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:26.025 23:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:26.025 23:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:26.025 23:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:26.025 23:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:26.025 23:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:37:26.025 23:19:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.025 23:19:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:26.025 23:19:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.025 23:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:26.025 "name": "raid_bdev1", 00:37:26.025 "uuid": "ea95c463-52af-4fda-9c48-3ed688cc2816", 00:37:26.025 "strip_size_kb": 0, 00:37:26.025 "state": "online", 00:37:26.025 "raid_level": "raid1", 00:37:26.025 "superblock": true, 00:37:26.025 "num_base_bdevs": 2, 00:37:26.025 "num_base_bdevs_discovered": 2, 00:37:26.025 "num_base_bdevs_operational": 2, 00:37:26.025 "process": { 00:37:26.025 "type": "rebuild", 00:37:26.025 "target": "spare", 00:37:26.025 "progress": { 00:37:26.025 "blocks": 20480, 00:37:26.025 "percent": 32 00:37:26.025 } 00:37:26.025 }, 00:37:26.025 "base_bdevs_list": [ 00:37:26.025 { 00:37:26.025 "name": "spare", 00:37:26.025 "uuid": "36b17363-88c5-5f01-8e2d-ca009af1d24f", 00:37:26.025 "is_configured": true, 00:37:26.025 "data_offset": 2048, 00:37:26.025 "data_size": 63488 00:37:26.025 }, 00:37:26.025 { 00:37:26.025 "name": "BaseBdev2", 00:37:26.025 "uuid": "f5f87794-48c5-5fa5-ab1b-2613d4a4d3cf", 00:37:26.025 "is_configured": true, 00:37:26.025 "data_offset": 2048, 00:37:26.025 "data_size": 63488 00:37:26.025 } 00:37:26.025 ] 00:37:26.025 }' 00:37:26.025 23:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:26.025 23:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:26.025 23:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:26.025 23:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:26.025 23:19:06 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:37:26.025 23:19:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.025 23:19:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:26.025 [2024-12-09 23:19:06.586555] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:26.284 [2024-12-09 23:19:06.671932] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:26.284 [2024-12-09 23:19:06.672012] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:26.284 [2024-12-09 23:19:06.672029] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:26.284 [2024-12-09 23:19:06.672041] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:37:26.284 23:19:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.284 23:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:26.284 23:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:26.284 23:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:26.284 23:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:26.284 23:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:26.284 23:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:37:26.284 23:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:26.284 23:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:26.284 23:19:06 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:26.284 23:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:26.284 23:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:26.284 23:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:26.284 23:19:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.284 23:19:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:26.284 23:19:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.284 23:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:26.284 "name": "raid_bdev1", 00:37:26.284 "uuid": "ea95c463-52af-4fda-9c48-3ed688cc2816", 00:37:26.284 "strip_size_kb": 0, 00:37:26.284 "state": "online", 00:37:26.284 "raid_level": "raid1", 00:37:26.284 "superblock": true, 00:37:26.284 "num_base_bdevs": 2, 00:37:26.284 "num_base_bdevs_discovered": 1, 00:37:26.284 "num_base_bdevs_operational": 1, 00:37:26.284 "base_bdevs_list": [ 00:37:26.284 { 00:37:26.284 "name": null, 00:37:26.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:26.284 "is_configured": false, 00:37:26.284 "data_offset": 0, 00:37:26.284 "data_size": 63488 00:37:26.284 }, 00:37:26.284 { 00:37:26.284 "name": "BaseBdev2", 00:37:26.284 "uuid": "f5f87794-48c5-5fa5-ab1b-2613d4a4d3cf", 00:37:26.284 "is_configured": true, 00:37:26.284 "data_offset": 2048, 00:37:26.284 "data_size": 63488 00:37:26.284 } 00:37:26.284 ] 00:37:26.284 }' 00:37:26.284 23:19:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:26.284 23:19:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:26.543 23:19:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 
00:37:26.543 23:19:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.543 23:19:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:26.543 [2024-12-09 23:19:07.116346] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:26.543 [2024-12-09 23:19:07.116432] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:26.543 [2024-12-09 23:19:07.116460] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:37:26.543 [2024-12-09 23:19:07.116474] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:26.543 [2024-12-09 23:19:07.116965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:26.543 [2024-12-09 23:19:07.116992] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:26.543 [2024-12-09 23:19:07.117096] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:37:26.543 [2024-12-09 23:19:07.117113] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:37:26.543 [2024-12-09 23:19:07.117124] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:37:26.543 [2024-12-09 23:19:07.117153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:26.543 [2024-12-09 23:19:07.133532] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:37:26.543 spare 00:37:26.543 [2024-12-09 23:19:07.135689] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:26.543 23:19:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.544 23:19:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:37:27.918 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:27.918 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:27.918 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:27.918 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:27.918 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:27.918 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:27.918 23:19:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.918 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:27.918 23:19:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:27.918 23:19:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.918 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:27.918 "name": "raid_bdev1", 00:37:27.918 "uuid": "ea95c463-52af-4fda-9c48-3ed688cc2816", 00:37:27.918 "strip_size_kb": 0, 00:37:27.918 "state": "online", 00:37:27.918 
"raid_level": "raid1", 00:37:27.918 "superblock": true, 00:37:27.918 "num_base_bdevs": 2, 00:37:27.918 "num_base_bdevs_discovered": 2, 00:37:27.918 "num_base_bdevs_operational": 2, 00:37:27.918 "process": { 00:37:27.918 "type": "rebuild", 00:37:27.918 "target": "spare", 00:37:27.918 "progress": { 00:37:27.918 "blocks": 20480, 00:37:27.918 "percent": 32 00:37:27.918 } 00:37:27.918 }, 00:37:27.918 "base_bdevs_list": [ 00:37:27.918 { 00:37:27.918 "name": "spare", 00:37:27.918 "uuid": "36b17363-88c5-5f01-8e2d-ca009af1d24f", 00:37:27.918 "is_configured": true, 00:37:27.918 "data_offset": 2048, 00:37:27.918 "data_size": 63488 00:37:27.918 }, 00:37:27.918 { 00:37:27.918 "name": "BaseBdev2", 00:37:27.918 "uuid": "f5f87794-48c5-5fa5-ab1b-2613d4a4d3cf", 00:37:27.918 "is_configured": true, 00:37:27.918 "data_offset": 2048, 00:37:27.918 "data_size": 63488 00:37:27.919 } 00:37:27.919 ] 00:37:27.919 }' 00:37:27.919 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:27.919 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:27.919 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:27.919 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:27.919 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:37:27.919 23:19:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.919 23:19:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:27.919 [2024-12-09 23:19:08.275544] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:27.919 [2024-12-09 23:19:08.341656] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:27.919 [2024-12-09 23:19:08.341749] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:27.919 [2024-12-09 23:19:08.341771] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:27.919 [2024-12-09 23:19:08.341780] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:37:27.919 23:19:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.919 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:27.919 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:27.919 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:27.919 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:27.919 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:27.919 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:37:27.919 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:27.919 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:27.919 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:27.919 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:27.919 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:27.919 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:27.919 23:19:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.919 23:19:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:27.919 23:19:08 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.919 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:27.919 "name": "raid_bdev1", 00:37:27.919 "uuid": "ea95c463-52af-4fda-9c48-3ed688cc2816", 00:37:27.919 "strip_size_kb": 0, 00:37:27.919 "state": "online", 00:37:27.919 "raid_level": "raid1", 00:37:27.919 "superblock": true, 00:37:27.919 "num_base_bdevs": 2, 00:37:27.919 "num_base_bdevs_discovered": 1, 00:37:27.919 "num_base_bdevs_operational": 1, 00:37:27.919 "base_bdevs_list": [ 00:37:27.919 { 00:37:27.919 "name": null, 00:37:27.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:27.919 "is_configured": false, 00:37:27.919 "data_offset": 0, 00:37:27.919 "data_size": 63488 00:37:27.919 }, 00:37:27.919 { 00:37:27.919 "name": "BaseBdev2", 00:37:27.919 "uuid": "f5f87794-48c5-5fa5-ab1b-2613d4a4d3cf", 00:37:27.919 "is_configured": true, 00:37:27.919 "data_offset": 2048, 00:37:27.919 "data_size": 63488 00:37:27.919 } 00:37:27.919 ] 00:37:27.919 }' 00:37:27.919 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:27.919 23:19:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:28.484 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:28.484 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:28.484 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:37:28.484 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:37:28.484 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:28.484 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:28.484 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:37:28.484 23:19:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.484 23:19:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:28.484 23:19:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.484 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:28.484 "name": "raid_bdev1", 00:37:28.484 "uuid": "ea95c463-52af-4fda-9c48-3ed688cc2816", 00:37:28.484 "strip_size_kb": 0, 00:37:28.484 "state": "online", 00:37:28.484 "raid_level": "raid1", 00:37:28.484 "superblock": true, 00:37:28.484 "num_base_bdevs": 2, 00:37:28.484 "num_base_bdevs_discovered": 1, 00:37:28.484 "num_base_bdevs_operational": 1, 00:37:28.484 "base_bdevs_list": [ 00:37:28.484 { 00:37:28.484 "name": null, 00:37:28.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:28.484 "is_configured": false, 00:37:28.484 "data_offset": 0, 00:37:28.484 "data_size": 63488 00:37:28.484 }, 00:37:28.484 { 00:37:28.484 "name": "BaseBdev2", 00:37:28.484 "uuid": "f5f87794-48c5-5fa5-ab1b-2613d4a4d3cf", 00:37:28.484 "is_configured": true, 00:37:28.484 "data_offset": 2048, 00:37:28.484 "data_size": 63488 00:37:28.484 } 00:37:28.484 ] 00:37:28.484 }' 00:37:28.484 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:28.484 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:37:28.484 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:28.484 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:37:28.484 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:37:28.484 23:19:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:37:28.484 23:19:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:28.484 23:19:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.484 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:37:28.484 23:19:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.484 23:19:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:28.484 [2024-12-09 23:19:08.985173] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:37:28.484 [2024-12-09 23:19:08.985243] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:28.484 [2024-12-09 23:19:08.985272] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:37:28.484 [2024-12-09 23:19:08.985294] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:28.484 [2024-12-09 23:19:08.985813] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:28.484 [2024-12-09 23:19:08.985841] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:37:28.484 [2024-12-09 23:19:08.985935] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:37:28.484 [2024-12-09 23:19:08.985952] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:37:28.484 [2024-12-09 23:19:08.985966] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:37:28.484 [2024-12-09 23:19:08.985978] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:37:28.484 BaseBdev1 00:37:28.484 23:19:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:37:28.484 23:19:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:37:29.420 23:19:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:29.420 23:19:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:29.420 23:19:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:29.420 23:19:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:29.420 23:19:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:29.420 23:19:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:37:29.420 23:19:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:29.420 23:19:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:29.420 23:19:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:29.420 23:19:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:29.420 23:19:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:29.420 23:19:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.420 23:19:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:29.420 23:19:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:29.420 23:19:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.420 23:19:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:29.420 "name": "raid_bdev1", 00:37:29.420 "uuid": "ea95c463-52af-4fda-9c48-3ed688cc2816", 00:37:29.420 "strip_size_kb": 0, 
00:37:29.420 "state": "online", 00:37:29.420 "raid_level": "raid1", 00:37:29.420 "superblock": true, 00:37:29.420 "num_base_bdevs": 2, 00:37:29.420 "num_base_bdevs_discovered": 1, 00:37:29.420 "num_base_bdevs_operational": 1, 00:37:29.420 "base_bdevs_list": [ 00:37:29.420 { 00:37:29.420 "name": null, 00:37:29.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:29.420 "is_configured": false, 00:37:29.420 "data_offset": 0, 00:37:29.420 "data_size": 63488 00:37:29.420 }, 00:37:29.420 { 00:37:29.420 "name": "BaseBdev2", 00:37:29.420 "uuid": "f5f87794-48c5-5fa5-ab1b-2613d4a4d3cf", 00:37:29.420 "is_configured": true, 00:37:29.420 "data_offset": 2048, 00:37:29.420 "data_size": 63488 00:37:29.420 } 00:37:29.420 ] 00:37:29.420 }' 00:37:29.420 23:19:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:29.420 23:19:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:29.989 23:19:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:29.989 23:19:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:29.989 23:19:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:37:29.989 23:19:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:37:29.989 23:19:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:29.989 23:19:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:29.989 23:19:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.989 23:19:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:29.989 23:19:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:29.989 23:19:10 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.989 23:19:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:29.989 "name": "raid_bdev1", 00:37:29.989 "uuid": "ea95c463-52af-4fda-9c48-3ed688cc2816", 00:37:29.989 "strip_size_kb": 0, 00:37:29.989 "state": "online", 00:37:29.989 "raid_level": "raid1", 00:37:29.989 "superblock": true, 00:37:29.989 "num_base_bdevs": 2, 00:37:29.989 "num_base_bdevs_discovered": 1, 00:37:29.989 "num_base_bdevs_operational": 1, 00:37:29.989 "base_bdevs_list": [ 00:37:29.989 { 00:37:29.989 "name": null, 00:37:29.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:29.989 "is_configured": false, 00:37:29.989 "data_offset": 0, 00:37:29.989 "data_size": 63488 00:37:29.989 }, 00:37:29.989 { 00:37:29.989 "name": "BaseBdev2", 00:37:29.989 "uuid": "f5f87794-48c5-5fa5-ab1b-2613d4a4d3cf", 00:37:29.989 "is_configured": true, 00:37:29.989 "data_offset": 2048, 00:37:29.989 "data_size": 63488 00:37:29.989 } 00:37:29.989 ] 00:37:29.989 }' 00:37:29.989 23:19:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:29.989 23:19:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:37:29.989 23:19:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:29.989 23:19:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:37:29.989 23:19:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:37:29.989 23:19:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:37:29.989 23:19:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:37:29.989 23:19:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:37:29.989 23:19:10 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:29.989 23:19:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:37:29.989 23:19:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:29.989 23:19:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:37:29.989 23:19:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.989 23:19:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:29.989 [2024-12-09 23:19:10.571075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:29.989 [2024-12-09 23:19:10.571265] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:37:29.989 [2024-12-09 23:19:10.571288] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:37:29.989 request: 00:37:29.989 { 00:37:29.989 "base_bdev": "BaseBdev1", 00:37:29.989 "raid_bdev": "raid_bdev1", 00:37:29.989 "method": "bdev_raid_add_base_bdev", 00:37:29.989 "req_id": 1 00:37:29.989 } 00:37:29.989 Got JSON-RPC error response 00:37:29.989 response: 00:37:29.989 { 00:37:29.989 "code": -22, 00:37:29.989 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:37:29.989 } 00:37:29.989 23:19:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:37:29.989 23:19:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:37:29.989 23:19:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:29.989 23:19:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:29.989 23:19:10 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:29.989 23:19:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:37:31.365 23:19:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:31.365 23:19:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:31.365 23:19:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:31.365 23:19:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:31.365 23:19:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:31.365 23:19:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:37:31.365 23:19:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:31.365 23:19:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:31.365 23:19:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:31.365 23:19:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:31.365 23:19:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:31.365 23:19:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:31.365 23:19:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.365 23:19:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:31.365 23:19:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.365 23:19:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:31.365 "name": "raid_bdev1", 00:37:31.365 "uuid": "ea95c463-52af-4fda-9c48-3ed688cc2816", 
00:37:31.365 "strip_size_kb": 0, 00:37:31.365 "state": "online", 00:37:31.365 "raid_level": "raid1", 00:37:31.365 "superblock": true, 00:37:31.365 "num_base_bdevs": 2, 00:37:31.365 "num_base_bdevs_discovered": 1, 00:37:31.365 "num_base_bdevs_operational": 1, 00:37:31.365 "base_bdevs_list": [ 00:37:31.365 { 00:37:31.365 "name": null, 00:37:31.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:31.365 "is_configured": false, 00:37:31.365 "data_offset": 0, 00:37:31.365 "data_size": 63488 00:37:31.365 }, 00:37:31.365 { 00:37:31.365 "name": "BaseBdev2", 00:37:31.365 "uuid": "f5f87794-48c5-5fa5-ab1b-2613d4a4d3cf", 00:37:31.365 "is_configured": true, 00:37:31.365 "data_offset": 2048, 00:37:31.365 "data_size": 63488 00:37:31.365 } 00:37:31.365 ] 00:37:31.365 }' 00:37:31.365 23:19:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:31.365 23:19:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:31.623 23:19:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:31.623 23:19:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:31.623 23:19:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:37:31.623 23:19:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:37:31.623 23:19:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:31.623 23:19:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:31.623 23:19:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:31.623 23:19:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.624 23:19:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:31.624 23:19:12 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.624 23:19:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:31.624 "name": "raid_bdev1", 00:37:31.624 "uuid": "ea95c463-52af-4fda-9c48-3ed688cc2816", 00:37:31.624 "strip_size_kb": 0, 00:37:31.624 "state": "online", 00:37:31.624 "raid_level": "raid1", 00:37:31.624 "superblock": true, 00:37:31.624 "num_base_bdevs": 2, 00:37:31.624 "num_base_bdevs_discovered": 1, 00:37:31.624 "num_base_bdevs_operational": 1, 00:37:31.624 "base_bdevs_list": [ 00:37:31.624 { 00:37:31.624 "name": null, 00:37:31.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:31.624 "is_configured": false, 00:37:31.624 "data_offset": 0, 00:37:31.624 "data_size": 63488 00:37:31.624 }, 00:37:31.624 { 00:37:31.624 "name": "BaseBdev2", 00:37:31.624 "uuid": "f5f87794-48c5-5fa5-ab1b-2613d4a4d3cf", 00:37:31.624 "is_configured": true, 00:37:31.624 "data_offset": 2048, 00:37:31.624 "data_size": 63488 00:37:31.624 } 00:37:31.624 ] 00:37:31.624 }' 00:37:31.624 23:19:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:31.624 23:19:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:37:31.624 23:19:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:31.624 23:19:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:37:31.624 23:19:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75600 00:37:31.624 23:19:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75600 ']' 00:37:31.624 23:19:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75600 00:37:31.624 23:19:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:37:31.624 23:19:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:37:31.624 23:19:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75600 00:37:31.624 killing process with pid 75600 00:37:31.624 Received shutdown signal, test time was about 60.000000 seconds 00:37:31.624 00:37:31.624 Latency(us) 00:37:31.624 [2024-12-09T23:19:12.260Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:31.624 [2024-12-09T23:19:12.260Z] =================================================================================================================== 00:37:31.624 [2024-12-09T23:19:12.260Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:31.624 23:19:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:31.624 23:19:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:31.624 23:19:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75600' 00:37:31.624 23:19:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75600 00:37:31.624 [2024-12-09 23:19:12.214222] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:31.624 23:19:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75600 00:37:31.624 [2024-12-09 23:19:12.214371] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:31.624 [2024-12-09 23:19:12.214440] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:31.624 [2024-12-09 23:19:12.214457] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:37:32.190 [2024-12-09 23:19:12.523460] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:33.124 23:19:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:37:33.124 00:37:33.124 real 0m23.320s 
00:37:33.124 user 0m28.380s 00:37:33.124 sys 0m4.049s 00:37:33.124 23:19:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:33.124 23:19:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:33.124 ************************************ 00:37:33.124 END TEST raid_rebuild_test_sb 00:37:33.124 ************************************ 00:37:33.124 23:19:13 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:37:33.124 23:19:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:37:33.124 23:19:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:33.124 23:19:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:33.124 ************************************ 00:37:33.124 START TEST raid_rebuild_test_io 00:37:33.124 ************************************ 00:37:33.124 23:19:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:37:33.124 23:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:37:33.124 23:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:37:33.124 23:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:37:33.124 23:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:37:33.124 23:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:37:33.124 23:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:37:33.124 23:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:37:33.124 23:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:37:33.124 23:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:37:33.124 
23:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:37:33.125 23:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:37:33.125 23:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:37:33.125 23:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:37:33.125 23:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:37:33.125 23:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:37:33.125 23:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:37:33.125 23:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:37:33.125 23:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:37:33.384 23:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:37:33.384 23:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:37:33.384 23:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:37:33.384 23:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:37:33.384 23:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:37:33.384 23:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76330 00:37:33.384 23:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76330 00:37:33.384 23:19:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:37:33.384 23:19:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76330 ']' 00:37:33.384 23:19:13 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:33.384 23:19:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:33.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:33.384 23:19:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:33.384 23:19:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:33.384 23:19:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:37:33.384 [2024-12-09 23:19:13.856581] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:37:33.384 [2024-12-09 23:19:13.856735] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76330 ] 00:37:33.384 I/O size of 3145728 is greater than zero copy threshold (65536). 00:37:33.384 Zero copy mechanism will not be used. 
00:37:33.642 [2024-12-09 23:19:14.038648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:33.642 [2024-12-09 23:19:14.155835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:33.900 [2024-12-09 23:19:14.361068] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:33.900 [2024-12-09 23:19:14.361138] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:34.164 23:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:34.164 BaseBdev1_malloc 00:37:34.164 BaseBdev1 00:37:34.164 23:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:37:34.164 23:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:37:34.164 23:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:37:34.164 23:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.164 23:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:37:34.164 23:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.164 23:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:37:34.164 23:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.164 23:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:37:34.164 [2024-12-09 23:19:14.760522] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:37:34.164 [2024-12-09 23:19:14.760584] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:34.164 [2024-12-09 23:19:14.760607] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:37:34.164 
[2024-12-09 23:19:14.760622] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:34.164 [2024-12-09 23:19:14.763009] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:34.164 [2024-12-09 23:19:14.763051] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:37:34.164 23:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.164 23:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:37:34.164 23:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:37:34.164 23:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.164 23:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:37:34.422 BaseBdev2_malloc 00:37:34.422 23:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.422 23:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:37:34.422 23:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.422 23:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:37:34.422 [2024-12-09 23:19:14.816056] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:37:34.422 [2024-12-09 23:19:14.816118] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:34.422 [2024-12-09 23:19:14.816139] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:37:34.422 [2024-12-09 23:19:14.816157] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:34.422 [2024-12-09 23:19:14.818549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:37:34.422 [2024-12-09 23:19:14.818586] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:37:34.422 BaseBdev2 00:37:34.422 23:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.422 23:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:37:34.422 23:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.422 23:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:37:34.422 spare_malloc 00:37:34.422 23:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.422 23:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:37:34.422 23:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.422 23:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:37:34.422 spare_delay 00:37:34.422 23:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.422 23:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:37:34.422 23:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.422 23:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:37:34.422 [2024-12-09 23:19:14.898808] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:34.422 [2024-12-09 23:19:14.898868] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:34.422 [2024-12-09 23:19:14.898890] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:37:34.422 [2024-12-09 23:19:14.898905] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:34.422 [2024-12-09 23:19:14.901302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:34.422 [2024-12-09 23:19:14.901343] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:34.422 spare 00:37:34.422 23:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.422 23:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:37:34.422 23:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.422 23:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:37:34.422 [2024-12-09 23:19:14.910864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:34.422 [2024-12-09 23:19:14.913001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:34.422 [2024-12-09 23:19:14.913100] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:37:34.422 [2024-12-09 23:19:14.913117] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:37:34.422 [2024-12-09 23:19:14.913380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:37:34.422 [2024-12-09 23:19:14.913564] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:37:34.422 [2024-12-09 23:19:14.913587] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:37:34.422 [2024-12-09 23:19:14.913736] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:34.422 23:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.422 23:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:34.422 23:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:34.422 23:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:34.422 23:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:34.422 23:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:34.422 23:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:37:34.422 23:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:34.422 23:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:34.422 23:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:34.422 23:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:34.422 23:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:34.422 23:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:34.422 23:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.423 23:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:37:34.423 23:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.423 23:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:34.423 "name": "raid_bdev1", 00:37:34.423 "uuid": "eecd4279-3e4e-437b-8fb5-86a9ebc4ccb0", 00:37:34.423 "strip_size_kb": 0, 00:37:34.423 "state": "online", 00:37:34.423 "raid_level": "raid1", 00:37:34.423 "superblock": false, 00:37:34.423 "num_base_bdevs": 2, 00:37:34.423 "num_base_bdevs_discovered": 2, 00:37:34.423 
"num_base_bdevs_operational": 2, 00:37:34.423 "base_bdevs_list": [ 00:37:34.423 { 00:37:34.423 "name": "BaseBdev1", 00:37:34.423 "uuid": "86a3076c-86a7-52d8-87d0-228a2fca4501", 00:37:34.423 "is_configured": true, 00:37:34.423 "data_offset": 0, 00:37:34.423 "data_size": 65536 00:37:34.423 }, 00:37:34.423 { 00:37:34.423 "name": "BaseBdev2", 00:37:34.423 "uuid": "55b10c1c-0b33-50c4-94cf-2a9be76b810f", 00:37:34.423 "is_configured": true, 00:37:34.423 "data_offset": 0, 00:37:34.423 "data_size": 65536 00:37:34.423 } 00:37:34.423 ] 00:37:34.423 }' 00:37:34.423 23:19:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:34.423 23:19:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:37:34.997 23:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:37:34.997 23:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:37:34.997 23:19:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.997 23:19:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:37:34.997 [2024-12-09 23:19:15.358789] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:34.997 23:19:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.997 23:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:37:34.997 23:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:34.997 23:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:37:34.997 23:19:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.997 23:19:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:37:34.997 23:19:15 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.997 23:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:37:34.997 23:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:37:34.997 23:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:37:34.997 23:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:37:34.997 23:19:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.997 23:19:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:37:34.997 [2024-12-09 23:19:15.442465] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:34.997 23:19:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.997 23:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:34.997 23:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:34.997 23:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:34.997 23:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:34.997 23:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:34.997 23:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:37:34.997 23:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:34.997 23:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:34.997 23:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:34.997 23:19:15 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:34.997 23:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:34.997 23:19:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.997 23:19:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:37:34.997 23:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:34.997 23:19:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.997 23:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:34.997 "name": "raid_bdev1", 00:37:34.997 "uuid": "eecd4279-3e4e-437b-8fb5-86a9ebc4ccb0", 00:37:34.997 "strip_size_kb": 0, 00:37:34.997 "state": "online", 00:37:34.997 "raid_level": "raid1", 00:37:34.997 "superblock": false, 00:37:34.997 "num_base_bdevs": 2, 00:37:34.997 "num_base_bdevs_discovered": 1, 00:37:34.997 "num_base_bdevs_operational": 1, 00:37:34.997 "base_bdevs_list": [ 00:37:34.997 { 00:37:34.997 "name": null, 00:37:34.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:34.997 "is_configured": false, 00:37:34.997 "data_offset": 0, 00:37:34.997 "data_size": 65536 00:37:34.997 }, 00:37:34.997 { 00:37:34.997 "name": "BaseBdev2", 00:37:34.997 "uuid": "55b10c1c-0b33-50c4-94cf-2a9be76b810f", 00:37:34.997 "is_configured": true, 00:37:34.997 "data_offset": 0, 00:37:34.997 "data_size": 65536 00:37:34.997 } 00:37:34.997 ] 00:37:34.997 }' 00:37:34.997 23:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:34.997 23:19:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:37:34.997 I/O size of 3145728 is greater than zero copy threshold (65536). 00:37:34.997 Zero copy mechanism will not be used. 00:37:34.997 Running I/O for 60 seconds... 
00:37:34.997 [2024-12-09 23:19:15.542368] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:37:35.255 23:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:37:35.255 23:19:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:35.255 23:19:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:37:35.255 [2024-12-09 23:19:15.851201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:35.514 23:19:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:35.514 23:19:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:37:35.514 [2024-12-09 23:19:15.908582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:37:35.514 [2024-12-09 23:19:15.910745] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:35.514 [2024-12-09 23:19:16.041418] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:37:35.772 [2024-12-09 23:19:16.162569] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:37:35.772 [2024-12-09 23:19:16.162904] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:37:36.030 [2024-12-09 23:19:16.504859] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:37:36.289 209.00 IOPS, 627.00 MiB/s [2024-12-09T23:19:16.925Z] [2024-12-09 23:19:16.719802] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:37:36.289 [2024-12-09 23:19:16.720137] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:37:36.289 23:19:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:36.289 23:19:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:36.289 23:19:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:36.289 23:19:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:36.289 23:19:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:36.289 23:19:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:36.289 23:19:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:36.289 23:19:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.289 23:19:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:37:36.564 23:19:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.564 23:19:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:36.564 "name": "raid_bdev1", 00:37:36.564 "uuid": "eecd4279-3e4e-437b-8fb5-86a9ebc4ccb0", 00:37:36.564 "strip_size_kb": 0, 00:37:36.564 "state": "online", 00:37:36.564 "raid_level": "raid1", 00:37:36.564 "superblock": false, 00:37:36.564 "num_base_bdevs": 2, 00:37:36.564 "num_base_bdevs_discovered": 2, 00:37:36.564 "num_base_bdevs_operational": 2, 00:37:36.564 "process": { 00:37:36.565 "type": "rebuild", 00:37:36.565 "target": "spare", 00:37:36.565 "progress": { 00:37:36.565 "blocks": 10240, 00:37:36.565 "percent": 15 00:37:36.565 } 00:37:36.565 }, 00:37:36.565 "base_bdevs_list": [ 00:37:36.565 { 00:37:36.565 "name": "spare", 00:37:36.565 "uuid": "1762b5fc-cdea-5524-b3e1-2fd4f579574e", 00:37:36.565 
"is_configured": true, 00:37:36.565 "data_offset": 0, 00:37:36.565 "data_size": 65536 00:37:36.565 }, 00:37:36.565 { 00:37:36.565 "name": "BaseBdev2", 00:37:36.565 "uuid": "55b10c1c-0b33-50c4-94cf-2a9be76b810f", 00:37:36.565 "is_configured": true, 00:37:36.565 "data_offset": 0, 00:37:36.565 "data_size": 65536 00:37:36.565 } 00:37:36.565 ] 00:37:36.565 }' 00:37:36.565 23:19:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:36.565 23:19:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:36.565 23:19:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:36.565 23:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:36.565 23:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:37:36.565 23:19:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.565 23:19:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:37:36.565 [2024-12-09 23:19:17.049301] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:36.565 [2024-12-09 23:19:17.074226] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:36.565 [2024-12-09 23:19:17.076641] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:36.565 [2024-12-09 23:19:17.076680] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:36.565 [2024-12-09 23:19:17.076697] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:37:36.565 [2024-12-09 23:19:17.120701] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:37:36.565 23:19:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:37:36.565 23:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:36.565 23:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:36.565 23:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:36.565 23:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:36.565 23:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:36.565 23:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:37:36.565 23:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:36.565 23:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:36.565 23:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:36.565 23:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:36.565 23:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:36.565 23:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:36.565 23:19:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.565 23:19:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:37:36.565 23:19:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.565 23:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:36.565 "name": "raid_bdev1", 00:37:36.565 "uuid": "eecd4279-3e4e-437b-8fb5-86a9ebc4ccb0", 00:37:36.565 "strip_size_kb": 0, 00:37:36.565 "state": "online", 00:37:36.565 "raid_level": "raid1", 00:37:36.565 "superblock": false, 
00:37:36.565 "num_base_bdevs": 2, 00:37:36.565 "num_base_bdevs_discovered": 1, 00:37:36.565 "num_base_bdevs_operational": 1, 00:37:36.565 "base_bdevs_list": [ 00:37:36.565 { 00:37:36.565 "name": null, 00:37:36.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:36.565 "is_configured": false, 00:37:36.565 "data_offset": 0, 00:37:36.565 "data_size": 65536 00:37:36.565 }, 00:37:36.565 { 00:37:36.565 "name": "BaseBdev2", 00:37:36.565 "uuid": "55b10c1c-0b33-50c4-94cf-2a9be76b810f", 00:37:36.565 "is_configured": true, 00:37:36.565 "data_offset": 0, 00:37:36.565 "data_size": 65536 00:37:36.565 } 00:37:36.565 ] 00:37:36.565 }' 00:37:36.565 23:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:36.565 23:19:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:37:37.153 23:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:37.153 23:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:37.153 23:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:37:37.153 23:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:37:37.153 23:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:37.153 23:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:37.153 23:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:37.153 23:19:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:37.153 23:19:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:37:37.153 178.50 IOPS, 535.50 MiB/s [2024-12-09T23:19:17.789Z] 23:19:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:37.153 
23:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:37.153 "name": "raid_bdev1", 00:37:37.153 "uuid": "eecd4279-3e4e-437b-8fb5-86a9ebc4ccb0", 00:37:37.153 "strip_size_kb": 0, 00:37:37.153 "state": "online", 00:37:37.153 "raid_level": "raid1", 00:37:37.153 "superblock": false, 00:37:37.153 "num_base_bdevs": 2, 00:37:37.153 "num_base_bdevs_discovered": 1, 00:37:37.153 "num_base_bdevs_operational": 1, 00:37:37.153 "base_bdevs_list": [ 00:37:37.153 { 00:37:37.153 "name": null, 00:37:37.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:37.153 "is_configured": false, 00:37:37.153 "data_offset": 0, 00:37:37.153 "data_size": 65536 00:37:37.153 }, 00:37:37.153 { 00:37:37.153 "name": "BaseBdev2", 00:37:37.153 "uuid": "55b10c1c-0b33-50c4-94cf-2a9be76b810f", 00:37:37.153 "is_configured": true, 00:37:37.153 "data_offset": 0, 00:37:37.153 "data_size": 65536 00:37:37.153 } 00:37:37.153 ] 00:37:37.153 }' 00:37:37.153 23:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:37.153 23:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:37:37.153 23:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:37.153 23:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:37:37.153 23:19:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:37:37.153 23:19:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:37.153 23:19:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:37:37.153 [2024-12-09 23:19:17.687473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:37.153 23:19:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:37.153 23:19:17 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:37:37.153 [2024-12-09 23:19:17.756227] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:37:37.153 [2024-12-09 23:19:17.758533] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:37.413 [2024-12-09 23:19:17.879005] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:37:37.413 [2024-12-09 23:19:17.879711] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:37:37.672 [2024-12-09 23:19:18.090760] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:37:37.672 [2024-12-09 23:19:18.091307] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:37:37.933 [2024-12-09 23:19:18.433631] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:37:37.933 [2024-12-09 23:19:18.434408] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:37:38.198 172.00 IOPS, 516.00 MiB/s [2024-12-09T23:19:18.834Z] 23:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:38.198 23:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:38.198 23:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:38.198 23:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:38.198 23:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:38.198 23:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:37:38.198 23:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:38.198 23:19:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.198 23:19:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:37:38.198 23:19:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.198 23:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:38.198 "name": "raid_bdev1", 00:37:38.198 "uuid": "eecd4279-3e4e-437b-8fb5-86a9ebc4ccb0", 00:37:38.198 "strip_size_kb": 0, 00:37:38.198 "state": "online", 00:37:38.198 "raid_level": "raid1", 00:37:38.198 "superblock": false, 00:37:38.198 "num_base_bdevs": 2, 00:37:38.198 "num_base_bdevs_discovered": 2, 00:37:38.198 "num_base_bdevs_operational": 2, 00:37:38.198 "process": { 00:37:38.198 "type": "rebuild", 00:37:38.198 "target": "spare", 00:37:38.198 "progress": { 00:37:38.198 "blocks": 10240, 00:37:38.198 "percent": 15 00:37:38.198 } 00:37:38.198 }, 00:37:38.198 "base_bdevs_list": [ 00:37:38.198 { 00:37:38.198 "name": "spare", 00:37:38.198 "uuid": "1762b5fc-cdea-5524-b3e1-2fd4f579574e", 00:37:38.198 "is_configured": true, 00:37:38.198 "data_offset": 0, 00:37:38.198 "data_size": 65536 00:37:38.198 }, 00:37:38.198 { 00:37:38.198 "name": "BaseBdev2", 00:37:38.198 "uuid": "55b10c1c-0b33-50c4-94cf-2a9be76b810f", 00:37:38.198 "is_configured": true, 00:37:38.198 "data_offset": 0, 00:37:38.198 "data_size": 65536 00:37:38.198 } 00:37:38.198 ] 00:37:38.198 }' 00:37:38.198 23:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:38.198 23:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:38.198 23:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:38.457 23:19:18 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:38.457 23:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:37:38.457 23:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:37:38.457 23:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:37:38.457 23:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:37:38.457 23:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=408 00:37:38.457 23:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:38.457 23:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:38.457 23:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:38.457 23:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:38.457 23:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:38.457 23:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:38.457 23:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:38.457 23:19:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.457 23:19:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:37:38.457 23:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:38.457 23:19:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.458 [2024-12-09 23:19:18.917996] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 
00:37:38.458 23:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:38.458 "name": "raid_bdev1", 00:37:38.458 "uuid": "eecd4279-3e4e-437b-8fb5-86a9ebc4ccb0", 00:37:38.458 "strip_size_kb": 0, 00:37:38.458 "state": "online", 00:37:38.458 "raid_level": "raid1", 00:37:38.458 "superblock": false, 00:37:38.458 "num_base_bdevs": 2, 00:37:38.458 "num_base_bdevs_discovered": 2, 00:37:38.458 "num_base_bdevs_operational": 2, 00:37:38.458 "process": { 00:37:38.458 "type": "rebuild", 00:37:38.458 "target": "spare", 00:37:38.458 "progress": { 00:37:38.458 "blocks": 12288, 00:37:38.458 "percent": 18 00:37:38.458 } 00:37:38.458 }, 00:37:38.458 "base_bdevs_list": [ 00:37:38.458 { 00:37:38.458 "name": "spare", 00:37:38.458 "uuid": "1762b5fc-cdea-5524-b3e1-2fd4f579574e", 00:37:38.458 "is_configured": true, 00:37:38.458 "data_offset": 0, 00:37:38.458 "data_size": 65536 00:37:38.458 }, 00:37:38.458 { 00:37:38.458 "name": "BaseBdev2", 00:37:38.458 "uuid": "55b10c1c-0b33-50c4-94cf-2a9be76b810f", 00:37:38.458 "is_configured": true, 00:37:38.458 "data_offset": 0, 00:37:38.458 "data_size": 65536 00:37:38.458 } 00:37:38.458 ] 00:37:38.458 }' 00:37:38.458 23:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:38.458 23:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:38.458 23:19:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:38.458 23:19:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:38.458 23:19:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:38.715 [2024-12-09 23:19:19.127278] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:37:38.715 [2024-12-09 23:19:19.127653] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:37:39.231 149.50 IOPS, 448.50 MiB/s [2024-12-09T23:19:19.867Z] [2024-12-09 23:19:19.835167] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:37:39.488 [2024-12-09 23:19:19.950616] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:37:39.488 23:19:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:39.488 23:19:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:39.488 23:19:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:39.488 23:19:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:39.488 23:19:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:39.488 23:19:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:39.488 23:19:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:39.488 23:19:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:39.488 23:19:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.489 23:19:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:37:39.489 23:19:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.489 23:19:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:39.489 "name": "raid_bdev1", 00:37:39.489 "uuid": "eecd4279-3e4e-437b-8fb5-86a9ebc4ccb0", 00:37:39.489 "strip_size_kb": 0, 00:37:39.489 "state": "online", 00:37:39.489 "raid_level": "raid1", 00:37:39.489 "superblock": false, 
00:37:39.489 "num_base_bdevs": 2, 00:37:39.489 "num_base_bdevs_discovered": 2, 00:37:39.489 "num_base_bdevs_operational": 2, 00:37:39.489 "process": { 00:37:39.489 "type": "rebuild", 00:37:39.489 "target": "spare", 00:37:39.489 "progress": { 00:37:39.489 "blocks": 28672, 00:37:39.489 "percent": 43 00:37:39.489 } 00:37:39.489 }, 00:37:39.489 "base_bdevs_list": [ 00:37:39.489 { 00:37:39.489 "name": "spare", 00:37:39.489 "uuid": "1762b5fc-cdea-5524-b3e1-2fd4f579574e", 00:37:39.489 "is_configured": true, 00:37:39.489 "data_offset": 0, 00:37:39.489 "data_size": 65536 00:37:39.489 }, 00:37:39.489 { 00:37:39.489 "name": "BaseBdev2", 00:37:39.489 "uuid": "55b10c1c-0b33-50c4-94cf-2a9be76b810f", 00:37:39.489 "is_configured": true, 00:37:39.489 "data_offset": 0, 00:37:39.489 "data_size": 65536 00:37:39.489 } 00:37:39.489 ] 00:37:39.489 }' 00:37:39.489 23:19:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:39.489 23:19:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:39.489 23:19:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:39.747 23:19:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:39.747 23:19:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:39.747 [2024-12-09 23:19:20.280420] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:37:39.747 [2024-12-09 23:19:20.280981] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:37:40.006 [2024-12-09 23:19:20.403960] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:37:40.006 131.20 IOPS, 393.60 MiB/s [2024-12-09T23:19:20.642Z] [2024-12-09 23:19:20.638162] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:37:40.269 [2024-12-09 23:19:20.771216] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:37:40.608 [2024-12-09 23:19:21.121034] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:37:40.608 23:19:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:40.608 23:19:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:40.608 23:19:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:40.608 23:19:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:40.608 23:19:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:40.608 23:19:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:40.608 23:19:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:40.608 23:19:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:40.608 23:19:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:40.608 23:19:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:37:40.608 23:19:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:40.608 23:19:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:40.608 "name": "raid_bdev1", 00:37:40.608 "uuid": "eecd4279-3e4e-437b-8fb5-86a9ebc4ccb0", 00:37:40.608 "strip_size_kb": 0, 00:37:40.608 "state": "online", 00:37:40.608 "raid_level": "raid1", 00:37:40.608 "superblock": false, 
00:37:40.608 "num_base_bdevs": 2, 00:37:40.608 "num_base_bdevs_discovered": 2, 00:37:40.608 "num_base_bdevs_operational": 2, 00:37:40.608 "process": { 00:37:40.608 "type": "rebuild", 00:37:40.608 "target": "spare", 00:37:40.608 "progress": { 00:37:40.608 "blocks": 45056, 00:37:40.608 "percent": 68 00:37:40.608 } 00:37:40.608 }, 00:37:40.608 "base_bdevs_list": [ 00:37:40.608 { 00:37:40.608 "name": "spare", 00:37:40.608 "uuid": "1762b5fc-cdea-5524-b3e1-2fd4f579574e", 00:37:40.608 "is_configured": true, 00:37:40.608 "data_offset": 0, 00:37:40.608 "data_size": 65536 00:37:40.608 }, 00:37:40.608 { 00:37:40.608 "name": "BaseBdev2", 00:37:40.608 "uuid": "55b10c1c-0b33-50c4-94cf-2a9be76b810f", 00:37:40.608 "is_configured": true, 00:37:40.608 "data_offset": 0, 00:37:40.608 "data_size": 65536 00:37:40.608 } 00:37:40.608 ] 00:37:40.608 }' 00:37:40.608 23:19:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:40.870 23:19:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:40.870 23:19:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:40.870 23:19:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:40.870 23:19:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:40.871 [2024-12-09 23:19:21.329682] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:37:41.388 116.33 IOPS, 349.00 MiB/s [2024-12-09T23:19:22.024Z] [2024-12-09 23:19:21.780603] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:37:41.647 23:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:41.647 23:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 
rebuild spare 00:37:41.647 23:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:41.647 23:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:41.647 23:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:41.647 23:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:41.647 23:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:41.906 23:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:41.906 23:19:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.906 23:19:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:37:41.906 23:19:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.906 23:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:41.906 "name": "raid_bdev1", 00:37:41.906 "uuid": "eecd4279-3e4e-437b-8fb5-86a9ebc4ccb0", 00:37:41.906 "strip_size_kb": 0, 00:37:41.906 "state": "online", 00:37:41.906 "raid_level": "raid1", 00:37:41.906 "superblock": false, 00:37:41.906 "num_base_bdevs": 2, 00:37:41.906 "num_base_bdevs_discovered": 2, 00:37:41.906 "num_base_bdevs_operational": 2, 00:37:41.906 "process": { 00:37:41.906 "type": "rebuild", 00:37:41.906 "target": "spare", 00:37:41.906 "progress": { 00:37:41.906 "blocks": 59392, 00:37:41.906 "percent": 90 00:37:41.906 } 00:37:41.906 }, 00:37:41.906 "base_bdevs_list": [ 00:37:41.906 { 00:37:41.906 "name": "spare", 00:37:41.906 "uuid": "1762b5fc-cdea-5524-b3e1-2fd4f579574e", 00:37:41.906 "is_configured": true, 00:37:41.906 "data_offset": 0, 00:37:41.906 "data_size": 65536 00:37:41.906 }, 00:37:41.906 { 00:37:41.906 "name": "BaseBdev2", 00:37:41.906 "uuid": 
"55b10c1c-0b33-50c4-94cf-2a9be76b810f", 00:37:41.906 "is_configured": true, 00:37:41.906 "data_offset": 0, 00:37:41.906 "data_size": 65536 00:37:41.906 } 00:37:41.906 ] 00:37:41.906 }' 00:37:41.906 23:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:41.906 23:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:41.906 23:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:41.906 23:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:41.906 23:19:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:41.906 [2024-12-09 23:19:22.541369] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:37:42.165 105.43 IOPS, 316.29 MiB/s [2024-12-09T23:19:22.801Z] [2024-12-09 23:19:22.641280] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:37:42.165 [2024-12-09 23:19:22.643781] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:43.099 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:43.099 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:43.099 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:43.099 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:43.099 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:43.099 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:43.099 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:43.100 23:19:23 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.100 23:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:37:43.100 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:43.100 23:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.100 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:43.100 "name": "raid_bdev1", 00:37:43.100 "uuid": "eecd4279-3e4e-437b-8fb5-86a9ebc4ccb0", 00:37:43.100 "strip_size_kb": 0, 00:37:43.100 "state": "online", 00:37:43.100 "raid_level": "raid1", 00:37:43.100 "superblock": false, 00:37:43.100 "num_base_bdevs": 2, 00:37:43.100 "num_base_bdevs_discovered": 2, 00:37:43.100 "num_base_bdevs_operational": 2, 00:37:43.100 "base_bdevs_list": [ 00:37:43.100 { 00:37:43.100 "name": "spare", 00:37:43.100 "uuid": "1762b5fc-cdea-5524-b3e1-2fd4f579574e", 00:37:43.100 "is_configured": true, 00:37:43.100 "data_offset": 0, 00:37:43.100 "data_size": 65536 00:37:43.100 }, 00:37:43.100 { 00:37:43.100 "name": "BaseBdev2", 00:37:43.100 "uuid": "55b10c1c-0b33-50c4-94cf-2a9be76b810f", 00:37:43.100 "is_configured": true, 00:37:43.100 "data_offset": 0, 00:37:43.100 "data_size": 65536 00:37:43.100 } 00:37:43.100 ] 00:37:43.100 }' 00:37:43.100 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:43.100 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:37:43.100 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:43.100 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:37:43.100 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:37:43.100 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 
-- # verify_raid_bdev_process raid_bdev1 none none 00:37:43.100 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:43.100 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:37:43.100 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:37:43.100 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:43.100 97.00 IOPS, 291.00 MiB/s [2024-12-09T23:19:23.736Z] 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:43.100 23:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.100 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:43.100 23:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:37:43.100 23:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.100 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:43.100 "name": "raid_bdev1", 00:37:43.100 "uuid": "eecd4279-3e4e-437b-8fb5-86a9ebc4ccb0", 00:37:43.100 "strip_size_kb": 0, 00:37:43.100 "state": "online", 00:37:43.100 "raid_level": "raid1", 00:37:43.100 "superblock": false, 00:37:43.100 "num_base_bdevs": 2, 00:37:43.100 "num_base_bdevs_discovered": 2, 00:37:43.100 "num_base_bdevs_operational": 2, 00:37:43.100 "base_bdevs_list": [ 00:37:43.100 { 00:37:43.100 "name": "spare", 00:37:43.100 "uuid": "1762b5fc-cdea-5524-b3e1-2fd4f579574e", 00:37:43.100 "is_configured": true, 00:37:43.100 "data_offset": 0, 00:37:43.100 "data_size": 65536 00:37:43.100 }, 00:37:43.100 { 00:37:43.100 "name": "BaseBdev2", 00:37:43.100 "uuid": "55b10c1c-0b33-50c4-94cf-2a9be76b810f", 00:37:43.100 "is_configured": true, 00:37:43.100 "data_offset": 0, 00:37:43.100 "data_size": 65536 
00:37:43.100 } 00:37:43.100 ] 00:37:43.100 }' 00:37:43.100 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:43.100 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:37:43.100 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:43.100 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:37:43.100 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:43.100 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:43.100 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:43.100 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:43.100 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:43.100 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:37:43.100 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:43.100 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:43.100 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:43.100 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:43.100 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:43.100 23:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.100 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:43.100 23:19:23 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:37:43.100 23:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.359 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:43.359 "name": "raid_bdev1", 00:37:43.359 "uuid": "eecd4279-3e4e-437b-8fb5-86a9ebc4ccb0", 00:37:43.359 "strip_size_kb": 0, 00:37:43.359 "state": "online", 00:37:43.359 "raid_level": "raid1", 00:37:43.359 "superblock": false, 00:37:43.359 "num_base_bdevs": 2, 00:37:43.359 "num_base_bdevs_discovered": 2, 00:37:43.359 "num_base_bdevs_operational": 2, 00:37:43.359 "base_bdevs_list": [ 00:37:43.359 { 00:37:43.359 "name": "spare", 00:37:43.359 "uuid": "1762b5fc-cdea-5524-b3e1-2fd4f579574e", 00:37:43.359 "is_configured": true, 00:37:43.359 "data_offset": 0, 00:37:43.359 "data_size": 65536 00:37:43.359 }, 00:37:43.359 { 00:37:43.359 "name": "BaseBdev2", 00:37:43.359 "uuid": "55b10c1c-0b33-50c4-94cf-2a9be76b810f", 00:37:43.359 "is_configured": true, 00:37:43.359 "data_offset": 0, 00:37:43.359 "data_size": 65536 00:37:43.359 } 00:37:43.359 ] 00:37:43.359 }' 00:37:43.359 23:19:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:43.359 23:19:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:37:43.618 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:37:43.618 23:19:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.618 23:19:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:37:43.618 [2024-12-09 23:19:24.134527] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:43.618 [2024-12-09 23:19:24.134686] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:43.618 00:37:43.618 Latency(us) 00:37:43.618 [2024-12-09T23:19:24.254Z] Device Information : runtime(s) IOPS 
MiB/s Fail/s TO/s Average min max 00:37:43.618 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:37:43.618 raid_bdev1 : 8.65 92.05 276.14 0.00 0.00 15156.29 307.61 114543.24 00:37:43.618 [2024-12-09T23:19:24.254Z] =================================================================================================================== 00:37:43.618 [2024-12-09T23:19:24.254Z] Total : 92.05 276.14 0.00 0.00 15156.29 307.61 114543.24 00:37:43.618 { 00:37:43.618 "results": [ 00:37:43.618 { 00:37:43.618 "job": "raid_bdev1", 00:37:43.618 "core_mask": "0x1", 00:37:43.618 "workload": "randrw", 00:37:43.618 "percentage": 50, 00:37:43.618 "status": "finished", 00:37:43.618 "queue_depth": 2, 00:37:43.618 "io_size": 3145728, 00:37:43.618 "runtime": 8.647677, 00:37:43.618 "iops": 92.04784128731913, 00:37:43.618 "mibps": 276.1435238619574, 00:37:43.618 "io_failed": 0, 00:37:43.618 "io_timeout": 0, 00:37:43.618 "avg_latency_us": 15156.29320901697, 00:37:43.618 "min_latency_us": 307.61124497991966, 00:37:43.618 "max_latency_us": 114543.24176706828 00:37:43.618 } 00:37:43.618 ], 00:37:43.618 "core_count": 1 00:37:43.618 } 00:37:43.618 [2024-12-09 23:19:24.200711] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:43.618 [2024-12-09 23:19:24.200801] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:43.618 [2024-12-09 23:19:24.200882] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:43.618 [2024-12-09 23:19:24.200898] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:37:43.618 23:19:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.618 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:37:43.618 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:37:43.618 23:19:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.618 23:19:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:37:43.618 23:19:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.878 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:37:43.878 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:37:43.878 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:37:43.878 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:37:43.878 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:37:43.878 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:37:43.878 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:43.878 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:37:43.878 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:43.878 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:37:43.878 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:43.878 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:43.878 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:37:43.878 /dev/nbd0 00:37:43.878 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:37:43.878 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:37:43.878 23:19:24 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:37:43.878 23:19:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:37:43.878 23:19:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:37:43.878 23:19:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:37:43.878 23:19:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:37:43.878 23:19:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:37:43.878 23:19:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:37:43.878 23:19:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:37:43.878 23:19:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:44.137 1+0 records in 00:37:44.137 1+0 records out 00:37:44.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281955 s, 14.5 MB/s 00:37:44.137 23:19:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:44.137 23:19:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:37:44.137 23:19:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:44.137 23:19:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:37:44.137 23:19:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:37:44.137 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:44.137 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:44.137 23:19:24 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:37:44.137 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:37:44.137 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:37:44.137 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:37:44.137 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:37:44.137 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:44.137 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:37:44.137 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:44.137 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:37:44.137 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:44.137 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:44.137 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:37:44.137 /dev/nbd1 00:37:44.396 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:37:44.396 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:37:44.396 23:19:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:37:44.396 23:19:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:37:44.396 23:19:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:37:44.396 23:19:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:37:44.396 23:19:24 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:37:44.396 23:19:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:37:44.396 23:19:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:37:44.396 23:19:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:37:44.396 23:19:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:44.396 1+0 records in 00:37:44.396 1+0 records out 00:37:44.396 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000487505 s, 8.4 MB/s 00:37:44.396 23:19:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:44.396 23:19:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:37:44.396 23:19:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:44.396 23:19:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:37:44.396 23:19:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:37:44.396 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:44.396 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:44.396 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:37:44.396 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:37:44.396 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:37:44.396 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:37:44.396 23:19:24 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:44.396 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:37:44.396 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:44.396 23:19:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:37:44.654 23:19:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:37:44.654 23:19:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:37:44.654 23:19:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:37:44.654 23:19:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:44.654 23:19:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:44.654 23:19:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:37:44.654 23:19:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:37:44.654 23:19:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:37:44.654 23:19:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:37:44.654 23:19:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:37:44.654 23:19:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:37:44.654 23:19:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:44.654 23:19:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:37:44.654 23:19:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:44.654 23:19:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:37:44.912 23:19:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:44.912 23:19:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:44.912 23:19:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:44.912 23:19:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:44.912 23:19:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:44.913 23:19:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:44.913 23:19:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:37:44.913 23:19:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:37:44.913 23:19:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:37:44.913 23:19:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76330 00:37:44.913 23:19:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76330 ']' 00:37:44.913 23:19:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76330 00:37:44.913 23:19:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:37:44.913 23:19:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:44.913 23:19:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76330 00:37:44.913 23:19:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:44.913 23:19:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:44.913 killing process with pid 76330 00:37:44.913 Received shutdown signal, test time was about 9.978347 seconds 00:37:44.913 00:37:44.913 Latency(us) 00:37:44.913 
[2024-12-09T23:19:25.549Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:44.913 [2024-12-09T23:19:25.549Z] =================================================================================================================== 00:37:44.913 [2024-12-09T23:19:25.549Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:44.913 23:19:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76330' 00:37:44.913 23:19:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76330 00:37:44.913 [2024-12-09 23:19:25.506871] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:44.913 23:19:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76330 00:37:45.172 [2024-12-09 23:19:25.745016] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:46.549 ************************************ 00:37:46.549 END TEST raid_rebuild_test_io 00:37:46.549 ************************************ 00:37:46.549 23:19:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:37:46.549 00:37:46.549 real 0m13.215s 00:37:46.549 user 0m16.352s 00:37:46.549 sys 0m1.701s 00:37:46.549 23:19:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:46.549 23:19:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:37:46.549 23:19:27 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:37:46.549 23:19:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:37:46.549 23:19:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:46.549 23:19:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:46.549 ************************************ 00:37:46.549 START TEST raid_rebuild_test_sb_io 00:37:46.549 ************************************ 00:37:46.549 23:19:27 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:37:46.549 23:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:37:46.549 23:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:37:46.549 23:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:37:46.549 23:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:37:46.549 23:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:37:46.549 23:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:37:46.549 23:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:37:46.549 23:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:37:46.549 23:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:37:46.549 23:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:37:46.549 23:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:37:46.549 23:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:37:46.549 23:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:37:46.549 23:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:37:46.549 23:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:37:46.549 23:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:37:46.549 23:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:37:46.549 23:19:27 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@577 -- # local create_arg 00:37:46.549 23:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:37:46.549 23:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:37:46.549 23:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:37:46.549 23:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:37:46.549 23:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:37:46.549 23:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:37:46.549 23:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76719 00:37:46.549 23:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:37:46.549 23:19:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76719 00:37:46.549 23:19:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 76719 ']' 00:37:46.549 23:19:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:46.549 23:19:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:46.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:46.549 23:19:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:46.549 23:19:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:46.549 23:19:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:46.549 I/O size of 3145728 is greater than zero copy threshold (65536). 00:37:46.549 Zero copy mechanism will not be used. 00:37:46.549 [2024-12-09 23:19:27.153842] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:37:46.549 [2024-12-09 23:19:27.153967] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76719 ] 00:37:46.808 [2024-12-09 23:19:27.333046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:47.067 [2024-12-09 23:19:27.450600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:47.067 [2024-12-09 23:19:27.653598] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:47.067 [2024-12-09 23:19:27.653652] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:47.642 BaseBdev1_malloc 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:47.642 [2024-12-09 23:19:28.068174] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:37:47.642 [2024-12-09 23:19:28.068246] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:47.642 [2024-12-09 23:19:28.068271] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:37:47.642 [2024-12-09 23:19:28.068286] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:47.642 [2024-12-09 23:19:28.070719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:47.642 [2024-12-09 23:19:28.070766] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:37:47.642 BaseBdev1 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:47.642 BaseBdev2_malloc 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:47.642 [2024-12-09 23:19:28.126231] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:37:47.642 [2024-12-09 23:19:28.126502] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:47.642 [2024-12-09 23:19:28.126538] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:37:47.642 [2024-12-09 23:19:28.126554] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:47.642 [2024-12-09 23:19:28.129056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:47.642 [2024-12-09 23:19:28.129100] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:37:47.642 BaseBdev2 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:47.642 spare_malloc 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:47.642 spare_delay 
00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:47.642 [2024-12-09 23:19:28.206533] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:47.642 [2024-12-09 23:19:28.206599] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:47.642 [2024-12-09 23:19:28.206624] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:37:47.642 [2024-12-09 23:19:28.206639] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:47.642 [2024-12-09 23:19:28.209076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:47.642 [2024-12-09 23:19:28.209123] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:47.642 spare 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:47.642 [2024-12-09 23:19:28.218604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:47.642 [2024-12-09 23:19:28.220710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:47.642 [2024-12-09 23:19:28.220880] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:37:47.642 [2024-12-09 23:19:28.220897] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:37:47.642 [2024-12-09 23:19:28.221158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:37:47.642 [2024-12-09 23:19:28.221310] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:37:47.642 [2024-12-09 23:19:28.221320] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:37:47.642 [2024-12-09 23:19:28.221506] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:47.642 23:19:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.642 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:47.642 "name": "raid_bdev1", 00:37:47.642 "uuid": "be27a85f-889e-40ab-ae1f-004cd8aa2486", 00:37:47.642 "strip_size_kb": 0, 00:37:47.642 "state": "online", 00:37:47.643 "raid_level": "raid1", 00:37:47.643 "superblock": true, 00:37:47.643 "num_base_bdevs": 2, 00:37:47.643 "num_base_bdevs_discovered": 2, 00:37:47.643 "num_base_bdevs_operational": 2, 00:37:47.643 "base_bdevs_list": [ 00:37:47.643 { 00:37:47.643 "name": "BaseBdev1", 00:37:47.643 "uuid": "96bf503c-0691-5dd2-b8e0-03aef51a9b1f", 00:37:47.643 "is_configured": true, 00:37:47.643 "data_offset": 2048, 00:37:47.643 "data_size": 63488 00:37:47.643 }, 00:37:47.643 { 00:37:47.643 "name": "BaseBdev2", 00:37:47.643 "uuid": "595fac0d-b132-5c7a-9de3-a30294af0788", 00:37:47.643 "is_configured": true, 00:37:47.643 "data_offset": 2048, 00:37:47.643 "data_size": 63488 00:37:47.643 } 00:37:47.643 ] 00:37:47.643 }' 00:37:47.643 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:47.643 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:48.210 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:37:48.210 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:37:48.210 23:19:28 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.210 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:48.210 [2024-12-09 23:19:28.658781] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:48.210 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.210 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:37:48.210 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:48.210 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.210 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:48.210 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:37:48.210 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.210 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:37:48.210 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:37:48.210 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:37:48.210 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:37:48.210 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.210 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:48.210 [2024-12-09 23:19:28.742502] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:48.210 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:37:48.210 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:48.210 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:48.210 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:48.210 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:48.210 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:48.210 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:37:48.210 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:48.210 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:48.210 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:48.210 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:48.210 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:48.210 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:48.210 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.210 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:48.211 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.211 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:48.211 "name": "raid_bdev1", 00:37:48.211 "uuid": "be27a85f-889e-40ab-ae1f-004cd8aa2486", 00:37:48.211 "strip_size_kb": 0, 00:37:48.211 "state": "online", 00:37:48.211 
"raid_level": "raid1", 00:37:48.211 "superblock": true, 00:37:48.211 "num_base_bdevs": 2, 00:37:48.211 "num_base_bdevs_discovered": 1, 00:37:48.211 "num_base_bdevs_operational": 1, 00:37:48.211 "base_bdevs_list": [ 00:37:48.211 { 00:37:48.211 "name": null, 00:37:48.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:48.211 "is_configured": false, 00:37:48.211 "data_offset": 0, 00:37:48.211 "data_size": 63488 00:37:48.211 }, 00:37:48.211 { 00:37:48.211 "name": "BaseBdev2", 00:37:48.211 "uuid": "595fac0d-b132-5c7a-9de3-a30294af0788", 00:37:48.211 "is_configured": true, 00:37:48.211 "data_offset": 2048, 00:37:48.211 "data_size": 63488 00:37:48.211 } 00:37:48.211 ] 00:37:48.211 }' 00:37:48.211 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:48.211 23:19:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:48.211 [2024-12-09 23:19:28.842425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:37:48.211 I/O size of 3145728 is greater than zero copy threshold (65536). 00:37:48.211 Zero copy mechanism will not be used. 00:37:48.211 Running I/O for 60 seconds... 
00:37:48.778 23:19:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:37:48.778 23:19:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.778 23:19:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:48.778 [2024-12-09 23:19:29.206515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:48.778 23:19:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.778 23:19:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:37:48.778 [2024-12-09 23:19:29.265805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:37:48.778 [2024-12-09 23:19:29.268113] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:48.778 [2024-12-09 23:19:29.377058] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:37:48.778 [2024-12-09 23:19:29.377879] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:37:49.037 [2024-12-09 23:19:29.598254] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:37:49.037 [2024-12-09 23:19:29.598864] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:37:49.555 203.00 IOPS, 609.00 MiB/s [2024-12-09T23:19:30.191Z] [2024-12-09 23:19:29.965876] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:37:49.555 [2024-12-09 23:19:29.966749] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:37:49.555 [2024-12-09 23:19:30.188808] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:37:49.813 23:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:49.813 23:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:49.813 23:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:49.813 23:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:49.813 23:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:49.813 23:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:49.813 23:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:49.813 23:19:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:49.813 23:19:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:49.813 23:19:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:49.813 23:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:49.813 "name": "raid_bdev1", 00:37:49.813 "uuid": "be27a85f-889e-40ab-ae1f-004cd8aa2486", 00:37:49.813 "strip_size_kb": 0, 00:37:49.813 "state": "online", 00:37:49.813 "raid_level": "raid1", 00:37:49.813 "superblock": true, 00:37:49.813 "num_base_bdevs": 2, 00:37:49.813 "num_base_bdevs_discovered": 2, 00:37:49.813 "num_base_bdevs_operational": 2, 00:37:49.813 "process": { 00:37:49.813 "type": "rebuild", 00:37:49.813 "target": "spare", 00:37:49.813 "progress": { 00:37:49.813 "blocks": 10240, 00:37:49.813 "percent": 16 00:37:49.813 } 00:37:49.813 }, 00:37:49.813 "base_bdevs_list": [ 00:37:49.813 { 00:37:49.813 "name": "spare", 
00:37:49.813 "uuid": "62060ce2-09ca-5974-bbcf-a4c027be80ff", 00:37:49.813 "is_configured": true, 00:37:49.813 "data_offset": 2048, 00:37:49.813 "data_size": 63488 00:37:49.813 }, 00:37:49.813 { 00:37:49.813 "name": "BaseBdev2", 00:37:49.813 "uuid": "595fac0d-b132-5c7a-9de3-a30294af0788", 00:37:49.813 "is_configured": true, 00:37:49.813 "data_offset": 2048, 00:37:49.813 "data_size": 63488 00:37:49.813 } 00:37:49.813 ] 00:37:49.813 }' 00:37:49.813 23:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:49.813 23:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:49.813 23:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:49.813 23:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:49.813 23:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:37:49.813 23:19:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:49.813 23:19:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:49.814 [2024-12-09 23:19:30.397974] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:50.073 [2024-12-09 23:19:30.506673] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:50.073 [2024-12-09 23:19:30.509236] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:50.073 [2024-12-09 23:19:30.509289] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:50.073 [2024-12-09 23:19:30.509304] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:37:50.073 [2024-12-09 23:19:30.552058] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 
0x60d000006080 00:37:50.073 23:19:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:50.073 23:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:50.073 23:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:50.073 23:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:50.073 23:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:50.073 23:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:50.073 23:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:37:50.073 23:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:50.073 23:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:50.073 23:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:50.073 23:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:50.073 23:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:50.073 23:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:50.073 23:19:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:50.073 23:19:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:50.073 23:19:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:50.073 23:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:50.073 "name": "raid_bdev1", 00:37:50.073 "uuid": 
"be27a85f-889e-40ab-ae1f-004cd8aa2486", 00:37:50.073 "strip_size_kb": 0, 00:37:50.073 "state": "online", 00:37:50.073 "raid_level": "raid1", 00:37:50.073 "superblock": true, 00:37:50.073 "num_base_bdevs": 2, 00:37:50.073 "num_base_bdevs_discovered": 1, 00:37:50.073 "num_base_bdevs_operational": 1, 00:37:50.073 "base_bdevs_list": [ 00:37:50.073 { 00:37:50.073 "name": null, 00:37:50.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:50.073 "is_configured": false, 00:37:50.073 "data_offset": 0, 00:37:50.073 "data_size": 63488 00:37:50.073 }, 00:37:50.073 { 00:37:50.073 "name": "BaseBdev2", 00:37:50.073 "uuid": "595fac0d-b132-5c7a-9de3-a30294af0788", 00:37:50.073 "is_configured": true, 00:37:50.073 "data_offset": 2048, 00:37:50.073 "data_size": 63488 00:37:50.073 } 00:37:50.073 ] 00:37:50.073 }' 00:37:50.073 23:19:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:50.073 23:19:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:50.606 167.50 IOPS, 502.50 MiB/s [2024-12-09T23:19:31.242Z] 23:19:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:50.606 23:19:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:50.606 23:19:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:37:50.606 23:19:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:37:50.606 23:19:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:50.606 23:19:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:50.606 23:19:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:50.606 23:19:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:37:50.606 23:19:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:50.606 23:19:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:50.606 23:19:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:50.606 "name": "raid_bdev1", 00:37:50.606 "uuid": "be27a85f-889e-40ab-ae1f-004cd8aa2486", 00:37:50.606 "strip_size_kb": 0, 00:37:50.606 "state": "online", 00:37:50.606 "raid_level": "raid1", 00:37:50.606 "superblock": true, 00:37:50.606 "num_base_bdevs": 2, 00:37:50.606 "num_base_bdevs_discovered": 1, 00:37:50.606 "num_base_bdevs_operational": 1, 00:37:50.606 "base_bdevs_list": [ 00:37:50.606 { 00:37:50.606 "name": null, 00:37:50.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:50.606 "is_configured": false, 00:37:50.606 "data_offset": 0, 00:37:50.606 "data_size": 63488 00:37:50.606 }, 00:37:50.607 { 00:37:50.607 "name": "BaseBdev2", 00:37:50.607 "uuid": "595fac0d-b132-5c7a-9de3-a30294af0788", 00:37:50.607 "is_configured": true, 00:37:50.607 "data_offset": 2048, 00:37:50.607 "data_size": 63488 00:37:50.607 } 00:37:50.607 ] 00:37:50.607 }' 00:37:50.607 23:19:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:50.607 23:19:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:37:50.607 23:19:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:50.607 23:19:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:37:50.607 23:19:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:37:50.607 23:19:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:50.607 23:19:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:50.607 
[2024-12-09 23:19:31.150319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:50.607 23:19:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:50.607 23:19:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:37:50.607 [2024-12-09 23:19:31.224626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:37:50.607 [2024-12-09 23:19:31.226865] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:50.866 [2024-12-09 23:19:31.335461] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:37:50.866 [2024-12-09 23:19:31.336020] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:37:51.125 [2024-12-09 23:19:31.556747] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:37:51.125 [2024-12-09 23:19:31.557087] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:37:51.384 178.33 IOPS, 535.00 MiB/s [2024-12-09T23:19:32.020Z] [2024-12-09 23:19:31.915671] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:37:51.643 [2024-12-09 23:19:32.124793] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:37:51.643 [2024-12-09 23:19:32.125137] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:37:51.643 23:19:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:51.643 23:19:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:37:51.643 23:19:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:51.643 23:19:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:51.643 23:19:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:51.643 23:19:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:51.643 23:19:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.643 23:19:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:51.643 23:19:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:51.643 23:19:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.643 23:19:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:51.643 "name": "raid_bdev1", 00:37:51.643 "uuid": "be27a85f-889e-40ab-ae1f-004cd8aa2486", 00:37:51.643 "strip_size_kb": 0, 00:37:51.643 "state": "online", 00:37:51.643 "raid_level": "raid1", 00:37:51.643 "superblock": true, 00:37:51.643 "num_base_bdevs": 2, 00:37:51.643 "num_base_bdevs_discovered": 2, 00:37:51.643 "num_base_bdevs_operational": 2, 00:37:51.644 "process": { 00:37:51.644 "type": "rebuild", 00:37:51.644 "target": "spare", 00:37:51.644 "progress": { 00:37:51.644 "blocks": 10240, 00:37:51.644 "percent": 16 00:37:51.644 } 00:37:51.644 }, 00:37:51.644 "base_bdevs_list": [ 00:37:51.644 { 00:37:51.644 "name": "spare", 00:37:51.644 "uuid": "62060ce2-09ca-5974-bbcf-a4c027be80ff", 00:37:51.644 "is_configured": true, 00:37:51.644 "data_offset": 2048, 00:37:51.644 "data_size": 63488 00:37:51.644 }, 00:37:51.644 { 00:37:51.644 "name": "BaseBdev2", 00:37:51.644 "uuid": "595fac0d-b132-5c7a-9de3-a30294af0788", 00:37:51.644 "is_configured": true, 00:37:51.644 
"data_offset": 2048, 00:37:51.644 "data_size": 63488 00:37:51.644 } 00:37:51.644 ] 00:37:51.644 }' 00:37:51.644 23:19:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:51.906 23:19:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:51.906 23:19:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:51.906 23:19:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:51.906 23:19:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:37:51.906 23:19:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:37:51.906 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:37:51.906 23:19:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:37:51.906 23:19:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:37:51.906 23:19:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:37:51.906 23:19:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=422 00:37:51.906 23:19:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:51.907 23:19:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:51.907 23:19:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:51.907 23:19:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:51.907 23:19:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:51.907 23:19:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:37:51.907 23:19:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:51.907 23:19:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:51.907 23:19:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.907 23:19:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:51.907 23:19:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.907 23:19:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:51.907 "name": "raid_bdev1", 00:37:51.907 "uuid": "be27a85f-889e-40ab-ae1f-004cd8aa2486", 00:37:51.907 "strip_size_kb": 0, 00:37:51.907 "state": "online", 00:37:51.907 "raid_level": "raid1", 00:37:51.907 "superblock": true, 00:37:51.907 "num_base_bdevs": 2, 00:37:51.907 "num_base_bdevs_discovered": 2, 00:37:51.907 "num_base_bdevs_operational": 2, 00:37:51.907 "process": { 00:37:51.907 "type": "rebuild", 00:37:51.907 "target": "spare", 00:37:51.907 "progress": { 00:37:51.907 "blocks": 12288, 00:37:51.907 "percent": 19 00:37:51.907 } 00:37:51.907 }, 00:37:51.907 "base_bdevs_list": [ 00:37:51.907 { 00:37:51.907 "name": "spare", 00:37:51.907 "uuid": "62060ce2-09ca-5974-bbcf-a4c027be80ff", 00:37:51.907 "is_configured": true, 00:37:51.907 "data_offset": 2048, 00:37:51.907 "data_size": 63488 00:37:51.907 }, 00:37:51.907 { 00:37:51.907 "name": "BaseBdev2", 00:37:51.907 "uuid": "595fac0d-b132-5c7a-9de3-a30294af0788", 00:37:51.907 "is_configured": true, 00:37:51.907 "data_offset": 2048, 00:37:51.907 "data_size": 63488 00:37:51.907 } 00:37:51.907 ] 00:37:51.907 }' 00:37:51.907 23:19:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:51.907 23:19:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:37:51.907 23:19:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:51.907 23:19:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:51.907 23:19:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:52.169 [2024-12-09 23:19:32.577893] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:37:52.169 [2024-12-09 23:19:32.578133] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:37:52.694 151.00 IOPS, 453.00 MiB/s [2024-12-09T23:19:33.330Z] [2024-12-09 23:19:33.162347] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:37:52.694 [2024-12-09 23:19:33.163170] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:37:52.694 [2024-12-09 23:19:33.273545] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:37:52.959 23:19:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:52.959 23:19:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:52.959 23:19:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:52.959 23:19:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:52.959 23:19:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:52.959 23:19:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:52.959 23:19:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:37:52.959 23:19:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:52.959 23:19:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:52.959 23:19:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:52.959 [2024-12-09 23:19:33.509202] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:37:52.959 23:19:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:52.959 23:19:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:52.959 "name": "raid_bdev1", 00:37:52.959 "uuid": "be27a85f-889e-40ab-ae1f-004cd8aa2486", 00:37:52.959 "strip_size_kb": 0, 00:37:52.959 "state": "online", 00:37:52.959 "raid_level": "raid1", 00:37:52.959 "superblock": true, 00:37:52.959 "num_base_bdevs": 2, 00:37:52.959 "num_base_bdevs_discovered": 2, 00:37:52.959 "num_base_bdevs_operational": 2, 00:37:52.959 "process": { 00:37:52.959 "type": "rebuild", 00:37:52.959 "target": "spare", 00:37:52.959 "progress": { 00:37:52.959 "blocks": 30720, 00:37:52.959 "percent": 48 00:37:52.959 } 00:37:52.959 }, 00:37:52.959 "base_bdevs_list": [ 00:37:52.959 { 00:37:52.959 "name": "spare", 00:37:52.959 "uuid": "62060ce2-09ca-5974-bbcf-a4c027be80ff", 00:37:52.959 "is_configured": true, 00:37:52.959 "data_offset": 2048, 00:37:52.959 "data_size": 63488 00:37:52.959 }, 00:37:52.959 { 00:37:52.959 "name": "BaseBdev2", 00:37:52.959 "uuid": "595fac0d-b132-5c7a-9de3-a30294af0788", 00:37:52.959 "is_configured": true, 00:37:52.959 "data_offset": 2048, 00:37:52.959 "data_size": 63488 00:37:52.959 } 00:37:52.959 ] 00:37:52.959 }' 00:37:52.959 23:19:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:52.959 23:19:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:52.959 23:19:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:53.217 23:19:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:53.217 23:19:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:53.476 130.20 IOPS, 390.60 MiB/s [2024-12-09T23:19:34.112Z] [2024-12-09 23:19:34.089245] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:37:54.044 [2024-12-09 23:19:34.526618] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:37:54.044 23:19:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:54.044 23:19:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:54.044 23:19:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:54.044 23:19:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:54.044 23:19:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:54.044 23:19:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:54.044 23:19:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:54.044 23:19:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:54.044 23:19:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.044 23:19:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:54.044 23:19:34 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.303 23:19:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:54.303 "name": "raid_bdev1", 00:37:54.303 "uuid": "be27a85f-889e-40ab-ae1f-004cd8aa2486", 00:37:54.303 "strip_size_kb": 0, 00:37:54.303 "state": "online", 00:37:54.303 "raid_level": "raid1", 00:37:54.303 "superblock": true, 00:37:54.303 "num_base_bdevs": 2, 00:37:54.303 "num_base_bdevs_discovered": 2, 00:37:54.304 "num_base_bdevs_operational": 2, 00:37:54.304 "process": { 00:37:54.304 "type": "rebuild", 00:37:54.304 "target": "spare", 00:37:54.304 "progress": { 00:37:54.304 "blocks": 49152, 00:37:54.304 "percent": 77 00:37:54.304 } 00:37:54.304 }, 00:37:54.304 "base_bdevs_list": [ 00:37:54.304 { 00:37:54.304 "name": "spare", 00:37:54.304 "uuid": "62060ce2-09ca-5974-bbcf-a4c027be80ff", 00:37:54.304 "is_configured": true, 00:37:54.304 "data_offset": 2048, 00:37:54.304 "data_size": 63488 00:37:54.304 }, 00:37:54.304 { 00:37:54.304 "name": "BaseBdev2", 00:37:54.304 "uuid": "595fac0d-b132-5c7a-9de3-a30294af0788", 00:37:54.304 "is_configured": true, 00:37:54.304 "data_offset": 2048, 00:37:54.304 "data_size": 63488 00:37:54.304 } 00:37:54.304 ] 00:37:54.304 }' 00:37:54.304 23:19:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:54.304 23:19:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:54.304 23:19:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:54.304 23:19:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:54.304 23:19:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:54.304 [2024-12-09 23:19:34.742270] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:37:54.562 115.00 IOPS, 345.00 MiB/s 
[2024-12-09T23:19:35.198Z] [2024-12-09 23:19:34.973248] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:37:54.821 [2024-12-09 23:19:35.296335] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:37:55.406 23:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:55.406 23:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:55.406 23:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:55.406 23:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:55.406 23:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:55.406 23:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:55.406 [2024-12-09 23:19:35.744115] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:37:55.406 23:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:55.406 23:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:55.406 23:19:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.406 23:19:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:55.406 23:19:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.406 23:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:55.406 "name": "raid_bdev1", 00:37:55.406 "uuid": "be27a85f-889e-40ab-ae1f-004cd8aa2486", 00:37:55.406 "strip_size_kb": 0, 00:37:55.406 "state": "online", 
00:37:55.406 "raid_level": "raid1", 00:37:55.406 "superblock": true, 00:37:55.406 "num_base_bdevs": 2, 00:37:55.406 "num_base_bdevs_discovered": 2, 00:37:55.406 "num_base_bdevs_operational": 2, 00:37:55.406 "process": { 00:37:55.406 "type": "rebuild", 00:37:55.406 "target": "spare", 00:37:55.406 "progress": { 00:37:55.406 "blocks": 63488, 00:37:55.406 "percent": 100 00:37:55.406 } 00:37:55.406 }, 00:37:55.406 "base_bdevs_list": [ 00:37:55.406 { 00:37:55.406 "name": "spare", 00:37:55.406 "uuid": "62060ce2-09ca-5974-bbcf-a4c027be80ff", 00:37:55.406 "is_configured": true, 00:37:55.406 "data_offset": 2048, 00:37:55.406 "data_size": 63488 00:37:55.406 }, 00:37:55.406 { 00:37:55.406 "name": "BaseBdev2", 00:37:55.406 "uuid": "595fac0d-b132-5c7a-9de3-a30294af0788", 00:37:55.406 "is_configured": true, 00:37:55.406 "data_offset": 2048, 00:37:55.406 "data_size": 63488 00:37:55.406 } 00:37:55.406 ] 00:37:55.406 }' 00:37:55.406 23:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:55.406 23:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:55.406 23:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:55.406 [2024-12-09 23:19:35.843950] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:37:55.406 [2024-12-09 23:19:35.846566] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:55.406 23:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:55.406 23:19:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:56.353 104.14 IOPS, 312.43 MiB/s [2024-12-09T23:19:36.989Z] 23:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:56.353 23:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:56.353 23:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:56.353 23:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:56.353 23:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:56.353 23:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:56.353 96.12 IOPS, 288.38 MiB/s [2024-12-09T23:19:36.989Z] 23:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:56.353 23:19:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.353 23:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:56.353 23:19:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:56.353 23:19:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.353 23:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:56.353 "name": "raid_bdev1", 00:37:56.353 "uuid": "be27a85f-889e-40ab-ae1f-004cd8aa2486", 00:37:56.353 "strip_size_kb": 0, 00:37:56.353 "state": "online", 00:37:56.353 "raid_level": "raid1", 00:37:56.353 "superblock": true, 00:37:56.353 "num_base_bdevs": 2, 00:37:56.353 "num_base_bdevs_discovered": 2, 00:37:56.353 "num_base_bdevs_operational": 2, 00:37:56.353 "base_bdevs_list": [ 00:37:56.353 { 00:37:56.353 "name": "spare", 00:37:56.353 "uuid": "62060ce2-09ca-5974-bbcf-a4c027be80ff", 00:37:56.353 "is_configured": true, 00:37:56.353 "data_offset": 2048, 00:37:56.353 "data_size": 63488 00:37:56.353 }, 00:37:56.353 { 00:37:56.353 "name": "BaseBdev2", 00:37:56.353 "uuid": "595fac0d-b132-5c7a-9de3-a30294af0788", 00:37:56.353 "is_configured": true, 00:37:56.353 "data_offset": 
2048, 00:37:56.353 "data_size": 63488 00:37:56.353 } 00:37:56.353 ] 00:37:56.353 }' 00:37:56.353 23:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:56.353 23:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:37:56.353 23:19:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:56.612 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:37:56.612 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:37:56.612 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:56.612 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:56.612 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:37:56.612 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:37:56.612 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:56.612 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:56.612 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.612 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:56.612 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:56.612 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.612 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:56.612 "name": "raid_bdev1", 00:37:56.612 "uuid": "be27a85f-889e-40ab-ae1f-004cd8aa2486", 00:37:56.612 
"strip_size_kb": 0, 00:37:56.612 "state": "online", 00:37:56.612 "raid_level": "raid1", 00:37:56.612 "superblock": true, 00:37:56.612 "num_base_bdevs": 2, 00:37:56.612 "num_base_bdevs_discovered": 2, 00:37:56.612 "num_base_bdevs_operational": 2, 00:37:56.612 "base_bdevs_list": [ 00:37:56.612 { 00:37:56.612 "name": "spare", 00:37:56.612 "uuid": "62060ce2-09ca-5974-bbcf-a4c027be80ff", 00:37:56.612 "is_configured": true, 00:37:56.612 "data_offset": 2048, 00:37:56.612 "data_size": 63488 00:37:56.612 }, 00:37:56.612 { 00:37:56.612 "name": "BaseBdev2", 00:37:56.612 "uuid": "595fac0d-b132-5c7a-9de3-a30294af0788", 00:37:56.612 "is_configured": true, 00:37:56.612 "data_offset": 2048, 00:37:56.612 "data_size": 63488 00:37:56.612 } 00:37:56.612 ] 00:37:56.612 }' 00:37:56.612 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:56.612 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:37:56.612 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:56.612 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:37:56.612 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:56.612 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:56.612 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:56.612 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:56.612 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:56.612 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:37:56.612 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:56.612 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:56.612 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:56.612 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:56.612 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:56.613 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:56.613 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.613 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:56.613 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.613 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:56.613 "name": "raid_bdev1", 00:37:56.613 "uuid": "be27a85f-889e-40ab-ae1f-004cd8aa2486", 00:37:56.613 "strip_size_kb": 0, 00:37:56.613 "state": "online", 00:37:56.613 "raid_level": "raid1", 00:37:56.613 "superblock": true, 00:37:56.613 "num_base_bdevs": 2, 00:37:56.613 "num_base_bdevs_discovered": 2, 00:37:56.613 "num_base_bdevs_operational": 2, 00:37:56.613 "base_bdevs_list": [ 00:37:56.613 { 00:37:56.613 "name": "spare", 00:37:56.613 "uuid": "62060ce2-09ca-5974-bbcf-a4c027be80ff", 00:37:56.613 "is_configured": true, 00:37:56.613 "data_offset": 2048, 00:37:56.613 "data_size": 63488 00:37:56.613 }, 00:37:56.613 { 00:37:56.613 "name": "BaseBdev2", 00:37:56.613 "uuid": "595fac0d-b132-5c7a-9de3-a30294af0788", 00:37:56.613 "is_configured": true, 00:37:56.613 "data_offset": 2048, 00:37:56.613 "data_size": 63488 00:37:56.613 } 00:37:56.613 ] 00:37:56.613 }' 00:37:56.613 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:37:56.613 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:57.181 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:37:57.181 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.181 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:57.181 [2024-12-09 23:19:37.595558] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:57.181 [2024-12-09 23:19:37.595598] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:57.181 00:37:57.181 Latency(us) 00:37:57.181 [2024-12-09T23:19:37.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:57.181 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:37:57.181 raid_bdev1 : 8.83 90.23 270.69 0.00 0.00 16011.54 304.32 116227.70 00:37:57.181 [2024-12-09T23:19:37.817Z] =================================================================================================================== 00:37:57.181 [2024-12-09T23:19:37.817Z] Total : 90.23 270.69 0.00 0.00 16011.54 304.32 116227.70 00:37:57.181 [2024-12-09 23:19:37.685595] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:57.181 [2024-12-09 23:19:37.685678] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:57.181 [2024-12-09 23:19:37.685757] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:57.181 [2024-12-09 23:19:37.685772] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:37:57.181 { 00:37:57.181 "results": [ 00:37:57.181 { 00:37:57.181 "job": "raid_bdev1", 00:37:57.181 "core_mask": "0x1", 00:37:57.181 "workload": "randrw", 00:37:57.181 
"percentage": 50, 00:37:57.181 "status": "finished", 00:37:57.181 "queue_depth": 2, 00:37:57.181 "io_size": 3145728, 00:37:57.181 "runtime": 8.832946, 00:37:57.181 "iops": 90.23037161101178, 00:37:57.181 "mibps": 270.69111483303533, 00:37:57.181 "io_failed": 0, 00:37:57.181 "io_timeout": 0, 00:37:57.181 "avg_latency_us": 16011.544141937893, 00:37:57.181 "min_latency_us": 304.3212851405622, 00:37:57.181 "max_latency_us": 116227.70120481927 00:37:57.181 } 00:37:57.181 ], 00:37:57.181 "core_count": 1 00:37:57.181 } 00:37:57.181 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.181 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:57.181 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:37:57.181 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.182 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:57.182 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.182 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:37:57.182 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:37:57.182 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:37:57.182 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:37:57.182 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:37:57.182 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:37:57.182 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:57.182 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:37:57.182 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:57.182 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:37:57.182 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:57.182 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:57.182 23:19:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:37:57.441 /dev/nbd0 00:37:57.441 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:37:57.441 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:37:57.441 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:37:57.441 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:37:57.441 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:37:57.441 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:37:57.441 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:37:57.441 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:37:57.441 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:37:57.441 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:37:57.441 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:57.441 1+0 records in 00:37:57.441 1+0 records out 00:37:57.441 4096 bytes (4.1 
kB, 4.0 KiB) copied, 0.000298585 s, 13.7 MB/s 00:37:57.441 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:57.441 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:37:57.441 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:57.698 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:37:57.698 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:37:57.698 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:57.698 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:57.698 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:37:57.698 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:37:57.698 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:37:57.698 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:37:57.698 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:37:57.698 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:57.698 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:37:57.698 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:57.698 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:37:57.698 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:57.698 23:19:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:57.698 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:37:57.698 /dev/nbd1 00:37:57.698 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:37:57.698 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:37:57.698 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:37:57.698 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:37:57.698 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:37:57.698 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:37:57.698 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:37:57.698 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:37:57.698 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:37:57.698 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:37:57.698 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:57.698 1+0 records in 00:37:57.698 1+0 records out 00:37:57.698 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000489085 s, 8.4 MB/s 00:37:57.957 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:57.957 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:37:57.957 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:57.957 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:37:57.957 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:37:57.957 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:57.957 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:57.957 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:37:57.957 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:37:57.957 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:37:57.957 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:37:57.957 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:57.957 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:37:57.957 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:57.957 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:37:58.216 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:37:58.216 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:37:58.216 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:37:58.216 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:58.216 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:58.216 
23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:37:58.216 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:37:58.216 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:37:58.216 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:37:58.216 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:37:58.216 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:37:58.216 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:58.216 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:37:58.216 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:58.216 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:37:58.484 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:58.484 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:58.484 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:58.484 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:58.484 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:58.484 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:58.484 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:37:58.484 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:37:58.484 23:19:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:37:58.484 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:37:58.484 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.484 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:58.484 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.484 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:37:58.484 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.484 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:58.484 [2024-12-09 23:19:38.996034] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:58.484 [2024-12-09 23:19:38.996099] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:58.484 [2024-12-09 23:19:38.996122] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:37:58.484 [2024-12-09 23:19:38.996135] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:58.484 [2024-12-09 23:19:38.998636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:58.484 [2024-12-09 23:19:38.998681] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:58.484 [2024-12-09 23:19:38.998781] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:37:58.484 [2024-12-09 23:19:38.998838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:58.484 [2024-12-09 23:19:38.998986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:58.484 spare 
00:37:58.484 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.484 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:37:58.484 23:19:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.484 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:58.484 [2024-12-09 23:19:39.098950] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:37:58.484 [2024-12-09 23:19:39.099007] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:37:58.484 [2024-12-09 23:19:39.099425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:37:58.485 [2024-12-09 23:19:39.099652] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:37:58.485 [2024-12-09 23:19:39.099669] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:37:58.485 [2024-12-09 23:19:39.099909] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:58.485 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.485 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:58.485 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:58.485 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:58.485 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:58.485 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:58.485 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:37:58.485 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:58.485 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:58.485 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:58.485 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:58.485 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:58.485 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.485 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:58.485 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:58.756 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.756 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:58.756 "name": "raid_bdev1", 00:37:58.756 "uuid": "be27a85f-889e-40ab-ae1f-004cd8aa2486", 00:37:58.756 "strip_size_kb": 0, 00:37:58.756 "state": "online", 00:37:58.756 "raid_level": "raid1", 00:37:58.756 "superblock": true, 00:37:58.756 "num_base_bdevs": 2, 00:37:58.756 "num_base_bdevs_discovered": 2, 00:37:58.756 "num_base_bdevs_operational": 2, 00:37:58.756 "base_bdevs_list": [ 00:37:58.756 { 00:37:58.756 "name": "spare", 00:37:58.756 "uuid": "62060ce2-09ca-5974-bbcf-a4c027be80ff", 00:37:58.756 "is_configured": true, 00:37:58.756 "data_offset": 2048, 00:37:58.756 "data_size": 63488 00:37:58.756 }, 00:37:58.756 { 00:37:58.756 "name": "BaseBdev2", 00:37:58.756 "uuid": "595fac0d-b132-5c7a-9de3-a30294af0788", 00:37:58.756 "is_configured": true, 00:37:58.756 "data_offset": 2048, 00:37:58.756 "data_size": 63488 00:37:58.756 } 00:37:58.756 ] 00:37:58.756 }' 
00:37:58.756 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:58.756 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:59.015 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:59.015 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:59.015 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:37:59.015 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:37:59.015 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:59.015 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:59.015 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.015 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:59.015 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:59.015 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.015 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:59.015 "name": "raid_bdev1", 00:37:59.015 "uuid": "be27a85f-889e-40ab-ae1f-004cd8aa2486", 00:37:59.015 "strip_size_kb": 0, 00:37:59.015 "state": "online", 00:37:59.015 "raid_level": "raid1", 00:37:59.015 "superblock": true, 00:37:59.015 "num_base_bdevs": 2, 00:37:59.015 "num_base_bdevs_discovered": 2, 00:37:59.015 "num_base_bdevs_operational": 2, 00:37:59.015 "base_bdevs_list": [ 00:37:59.015 { 00:37:59.015 "name": "spare", 00:37:59.015 "uuid": "62060ce2-09ca-5974-bbcf-a4c027be80ff", 00:37:59.015 "is_configured": true, 00:37:59.015 "data_offset": 
2048, 00:37:59.015 "data_size": 63488 00:37:59.015 }, 00:37:59.015 { 00:37:59.015 "name": "BaseBdev2", 00:37:59.015 "uuid": "595fac0d-b132-5c7a-9de3-a30294af0788", 00:37:59.015 "is_configured": true, 00:37:59.015 "data_offset": 2048, 00:37:59.015 "data_size": 63488 00:37:59.015 } 00:37:59.015 ] 00:37:59.015 }' 00:37:59.015 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:59.015 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:37:59.015 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:59.015 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:37:59.015 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:37:59.015 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:59.015 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.015 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:59.274 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.274 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:37:59.274 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:37:59.274 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.274 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:59.274 [2024-12-09 23:19:39.667332] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:59.274 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.274 
23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:59.274 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:59.274 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:59.274 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:59.274 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:59.274 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:37:59.274 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:59.274 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:59.274 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:59.274 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:59.274 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:59.274 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:59.274 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.274 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:59.274 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.274 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:59.274 "name": "raid_bdev1", 00:37:59.274 "uuid": "be27a85f-889e-40ab-ae1f-004cd8aa2486", 00:37:59.274 "strip_size_kb": 0, 00:37:59.274 "state": "online", 00:37:59.274 "raid_level": "raid1", 00:37:59.274 
"superblock": true, 00:37:59.274 "num_base_bdevs": 2, 00:37:59.274 "num_base_bdevs_discovered": 1, 00:37:59.274 "num_base_bdevs_operational": 1, 00:37:59.274 "base_bdevs_list": [ 00:37:59.274 { 00:37:59.274 "name": null, 00:37:59.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:59.274 "is_configured": false, 00:37:59.274 "data_offset": 0, 00:37:59.274 "data_size": 63488 00:37:59.274 }, 00:37:59.274 { 00:37:59.274 "name": "BaseBdev2", 00:37:59.274 "uuid": "595fac0d-b132-5c7a-9de3-a30294af0788", 00:37:59.274 "is_configured": true, 00:37:59.274 "data_offset": 2048, 00:37:59.274 "data_size": 63488 00:37:59.274 } 00:37:59.274 ] 00:37:59.274 }' 00:37:59.274 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:59.274 23:19:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:59.533 23:19:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:37:59.534 23:19:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.534 23:19:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:37:59.534 [2024-12-09 23:19:40.090845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:59.534 [2024-12-09 23:19:40.091346] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:37:59.534 [2024-12-09 23:19:40.091383] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:37:59.534 [2024-12-09 23:19:40.091475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:59.534 [2024-12-09 23:19:40.110440] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:37:59.534 23:19:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.534 23:19:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:37:59.534 [2024-12-09 23:19:40.112618] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:00.917 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:00.917 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:00.917 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:00.917 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:00.917 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:00.917 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:00.917 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:00.917 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.917 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:00.917 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.917 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:00.917 "name": "raid_bdev1", 00:38:00.917 "uuid": "be27a85f-889e-40ab-ae1f-004cd8aa2486", 00:38:00.917 "strip_size_kb": 0, 00:38:00.917 "state": "online", 
00:38:00.917 "raid_level": "raid1", 00:38:00.917 "superblock": true, 00:38:00.917 "num_base_bdevs": 2, 00:38:00.917 "num_base_bdevs_discovered": 2, 00:38:00.917 "num_base_bdevs_operational": 2, 00:38:00.917 "process": { 00:38:00.917 "type": "rebuild", 00:38:00.917 "target": "spare", 00:38:00.917 "progress": { 00:38:00.917 "blocks": 20480, 00:38:00.917 "percent": 32 00:38:00.917 } 00:38:00.917 }, 00:38:00.917 "base_bdevs_list": [ 00:38:00.917 { 00:38:00.917 "name": "spare", 00:38:00.917 "uuid": "62060ce2-09ca-5974-bbcf-a4c027be80ff", 00:38:00.917 "is_configured": true, 00:38:00.917 "data_offset": 2048, 00:38:00.917 "data_size": 63488 00:38:00.917 }, 00:38:00.917 { 00:38:00.917 "name": "BaseBdev2", 00:38:00.917 "uuid": "595fac0d-b132-5c7a-9de3-a30294af0788", 00:38:00.917 "is_configured": true, 00:38:00.917 "data_offset": 2048, 00:38:00.917 "data_size": 63488 00:38:00.917 } 00:38:00.917 ] 00:38:00.917 }' 00:38:00.917 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:00.917 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:00.917 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:00.917 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:00.917 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:38:00.917 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.917 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:00.917 [2024-12-09 23:19:41.244594] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:00.917 [2024-12-09 23:19:41.318469] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:00.917 [2024-12-09 
23:19:41.318559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:00.917 [2024-12-09 23:19:41.318580] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:00.917 [2024-12-09 23:19:41.318589] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:00.917 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.917 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:00.917 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:00.917 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:00.917 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:00.917 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:00.917 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:38:00.917 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:00.917 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:00.917 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:00.917 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:00.917 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:00.917 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:00.917 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.917 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:38:00.917 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.917 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:00.917 "name": "raid_bdev1", 00:38:00.917 "uuid": "be27a85f-889e-40ab-ae1f-004cd8aa2486", 00:38:00.917 "strip_size_kb": 0, 00:38:00.917 "state": "online", 00:38:00.917 "raid_level": "raid1", 00:38:00.917 "superblock": true, 00:38:00.917 "num_base_bdevs": 2, 00:38:00.917 "num_base_bdevs_discovered": 1, 00:38:00.917 "num_base_bdevs_operational": 1, 00:38:00.917 "base_bdevs_list": [ 00:38:00.917 { 00:38:00.917 "name": null, 00:38:00.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:00.917 "is_configured": false, 00:38:00.917 "data_offset": 0, 00:38:00.917 "data_size": 63488 00:38:00.917 }, 00:38:00.917 { 00:38:00.917 "name": "BaseBdev2", 00:38:00.917 "uuid": "595fac0d-b132-5c7a-9de3-a30294af0788", 00:38:00.917 "is_configured": true, 00:38:00.917 "data_offset": 2048, 00:38:00.917 "data_size": 63488 00:38:00.917 } 00:38:00.917 ] 00:38:00.917 }' 00:38:00.917 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:00.917 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:01.176 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:38:01.176 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.176 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:01.176 [2024-12-09 23:19:41.806486] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:01.176 [2024-12-09 23:19:41.806560] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:01.176 [2024-12-09 23:19:41.806590] vbdev_passthru.c: 682:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:38:01.176 [2024-12-09 23:19:41.806602] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:01.176 [2024-12-09 23:19:41.807134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:01.176 [2024-12-09 23:19:41.807156] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:01.176 [2024-12-09 23:19:41.807259] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:38:01.176 [2024-12-09 23:19:41.807274] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:38:01.176 [2024-12-09 23:19:41.807291] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:38:01.176 [2024-12-09 23:19:41.807318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:01.438 [2024-12-09 23:19:41.824477] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:38:01.438 spare 00:38:01.438 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.438 23:19:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:38:01.438 [2024-12-09 23:19:41.826969] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:02.380 23:19:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:02.380 23:19:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:02.380 23:19:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:02.380 23:19:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:02.380 23:19:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:02.380 23:19:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:02.380 23:19:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.380 23:19:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:02.380 23:19:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:02.380 23:19:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.380 23:19:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:02.380 "name": "raid_bdev1", 00:38:02.380 "uuid": "be27a85f-889e-40ab-ae1f-004cd8aa2486", 00:38:02.380 "strip_size_kb": 0, 00:38:02.380 "state": "online", 00:38:02.380 "raid_level": "raid1", 00:38:02.380 "superblock": true, 00:38:02.380 "num_base_bdevs": 2, 00:38:02.380 "num_base_bdevs_discovered": 2, 00:38:02.380 "num_base_bdevs_operational": 2, 00:38:02.380 "process": { 00:38:02.380 "type": "rebuild", 00:38:02.380 "target": "spare", 00:38:02.380 "progress": { 00:38:02.380 "blocks": 20480, 00:38:02.380 "percent": 32 00:38:02.380 } 00:38:02.380 }, 00:38:02.380 "base_bdevs_list": [ 00:38:02.380 { 00:38:02.380 "name": "spare", 00:38:02.380 "uuid": "62060ce2-09ca-5974-bbcf-a4c027be80ff", 00:38:02.380 "is_configured": true, 00:38:02.380 "data_offset": 2048, 00:38:02.380 "data_size": 63488 00:38:02.380 }, 00:38:02.380 { 00:38:02.380 "name": "BaseBdev2", 00:38:02.380 "uuid": "595fac0d-b132-5c7a-9de3-a30294af0788", 00:38:02.380 "is_configured": true, 00:38:02.380 "data_offset": 2048, 00:38:02.380 "data_size": 63488 00:38:02.380 } 00:38:02.380 ] 00:38:02.380 }' 00:38:02.380 23:19:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:02.380 23:19:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:38:02.380 23:19:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:02.380 23:19:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:02.380 23:19:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:38:02.380 23:19:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.380 23:19:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:02.380 [2024-12-09 23:19:42.978559] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:02.638 [2024-12-09 23:19:43.032809] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:02.638 [2024-12-09 23:19:43.032917] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:02.638 [2024-12-09 23:19:43.032935] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:02.638 [2024-12-09 23:19:43.032951] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:02.638 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.638 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:02.638 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:02.638 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:02.638 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:02.638 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:02.639 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:38:02.639 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:02.639 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:02.639 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:02.639 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:02.639 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:02.639 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:02.639 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.639 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:02.639 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.639 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:02.639 "name": "raid_bdev1", 00:38:02.639 "uuid": "be27a85f-889e-40ab-ae1f-004cd8aa2486", 00:38:02.639 "strip_size_kb": 0, 00:38:02.639 "state": "online", 00:38:02.639 "raid_level": "raid1", 00:38:02.639 "superblock": true, 00:38:02.639 "num_base_bdevs": 2, 00:38:02.639 "num_base_bdevs_discovered": 1, 00:38:02.639 "num_base_bdevs_operational": 1, 00:38:02.639 "base_bdevs_list": [ 00:38:02.639 { 00:38:02.639 "name": null, 00:38:02.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:02.639 "is_configured": false, 00:38:02.639 "data_offset": 0, 00:38:02.639 "data_size": 63488 00:38:02.639 }, 00:38:02.639 { 00:38:02.639 "name": "BaseBdev2", 00:38:02.639 "uuid": "595fac0d-b132-5c7a-9de3-a30294af0788", 00:38:02.639 "is_configured": true, 00:38:02.639 "data_offset": 2048, 00:38:02.639 "data_size": 63488 00:38:02.639 } 00:38:02.639 ] 00:38:02.639 }' 
00:38:02.639 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:02.639 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:02.897 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:02.897 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:02.897 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:02.897 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:02.897 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:02.897 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:02.897 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.897 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:02.897 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:02.897 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:03.156 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:03.156 "name": "raid_bdev1", 00:38:03.156 "uuid": "be27a85f-889e-40ab-ae1f-004cd8aa2486", 00:38:03.156 "strip_size_kb": 0, 00:38:03.156 "state": "online", 00:38:03.156 "raid_level": "raid1", 00:38:03.156 "superblock": true, 00:38:03.156 "num_base_bdevs": 2, 00:38:03.156 "num_base_bdevs_discovered": 1, 00:38:03.156 "num_base_bdevs_operational": 1, 00:38:03.156 "base_bdevs_list": [ 00:38:03.156 { 00:38:03.156 "name": null, 00:38:03.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:03.156 "is_configured": false, 00:38:03.156 "data_offset": 0, 
00:38:03.156 "data_size": 63488 00:38:03.156 }, 00:38:03.156 { 00:38:03.156 "name": "BaseBdev2", 00:38:03.156 "uuid": "595fac0d-b132-5c7a-9de3-a30294af0788", 00:38:03.156 "is_configured": true, 00:38:03.156 "data_offset": 2048, 00:38:03.156 "data_size": 63488 00:38:03.156 } 00:38:03.156 ] 00:38:03.156 }' 00:38:03.156 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:03.156 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:03.156 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:03.156 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:03.156 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:38:03.156 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:03.157 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:03.157 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:03.157 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:38:03.157 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:03.157 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:03.157 [2024-12-09 23:19:43.628898] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:38:03.157 [2024-12-09 23:19:43.628964] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:03.157 [2024-12-09 23:19:43.628988] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:38:03.157 [2024-12-09 23:19:43.629002] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:03.157 [2024-12-09 23:19:43.629479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:03.157 [2024-12-09 23:19:43.629509] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:38:03.157 [2024-12-09 23:19:43.629597] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:38:03.157 [2024-12-09 23:19:43.629615] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:38:03.157 [2024-12-09 23:19:43.629624] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:38:03.157 [2024-12-09 23:19:43.629640] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:38:03.157 BaseBdev1 00:38:03.157 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:03.157 23:19:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:38:04.094 23:19:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:04.094 23:19:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:04.094 23:19:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:04.094 23:19:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:04.094 23:19:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:04.094 23:19:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:38:04.094 23:19:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:04.094 23:19:44 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:04.094 23:19:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:04.094 23:19:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:04.094 23:19:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:04.094 23:19:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:04.094 23:19:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.094 23:19:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:04.094 23:19:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.094 23:19:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:04.094 "name": "raid_bdev1", 00:38:04.094 "uuid": "be27a85f-889e-40ab-ae1f-004cd8aa2486", 00:38:04.094 "strip_size_kb": 0, 00:38:04.094 "state": "online", 00:38:04.094 "raid_level": "raid1", 00:38:04.094 "superblock": true, 00:38:04.094 "num_base_bdevs": 2, 00:38:04.094 "num_base_bdevs_discovered": 1, 00:38:04.094 "num_base_bdevs_operational": 1, 00:38:04.094 "base_bdevs_list": [ 00:38:04.094 { 00:38:04.094 "name": null, 00:38:04.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:04.094 "is_configured": false, 00:38:04.094 "data_offset": 0, 00:38:04.094 "data_size": 63488 00:38:04.094 }, 00:38:04.094 { 00:38:04.094 "name": "BaseBdev2", 00:38:04.094 "uuid": "595fac0d-b132-5c7a-9de3-a30294af0788", 00:38:04.094 "is_configured": true, 00:38:04.094 "data_offset": 2048, 00:38:04.094 "data_size": 63488 00:38:04.094 } 00:38:04.094 ] 00:38:04.094 }' 00:38:04.094 23:19:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:04.094 23:19:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:38:04.660 23:19:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:04.660 23:19:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:04.660 23:19:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:04.660 23:19:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:04.660 23:19:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:04.660 23:19:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:04.660 23:19:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:04.660 23:19:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.660 23:19:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:04.660 23:19:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.660 23:19:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:04.660 "name": "raid_bdev1", 00:38:04.660 "uuid": "be27a85f-889e-40ab-ae1f-004cd8aa2486", 00:38:04.660 "strip_size_kb": 0, 00:38:04.660 "state": "online", 00:38:04.660 "raid_level": "raid1", 00:38:04.660 "superblock": true, 00:38:04.660 "num_base_bdevs": 2, 00:38:04.660 "num_base_bdevs_discovered": 1, 00:38:04.660 "num_base_bdevs_operational": 1, 00:38:04.660 "base_bdevs_list": [ 00:38:04.660 { 00:38:04.660 "name": null, 00:38:04.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:04.660 "is_configured": false, 00:38:04.660 "data_offset": 0, 00:38:04.660 "data_size": 63488 00:38:04.660 }, 00:38:04.660 { 00:38:04.660 "name": "BaseBdev2", 00:38:04.660 "uuid": "595fac0d-b132-5c7a-9de3-a30294af0788", 00:38:04.660 "is_configured": true, 
00:38:04.660 "data_offset": 2048, 00:38:04.660 "data_size": 63488 00:38:04.660 } 00:38:04.660 ] 00:38:04.660 }' 00:38:04.660 23:19:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:04.660 23:19:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:04.660 23:19:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:04.660 23:19:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:04.660 23:19:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:38:04.660 23:19:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:38:04.660 23:19:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:38:04.660 23:19:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:38:04.660 23:19:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:04.660 23:19:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:38:04.660 23:19:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:04.660 23:19:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:38:04.660 23:19:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.660 23:19:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:04.660 [2024-12-09 23:19:45.231106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:04.660 [2024-12-09 23:19:45.231430] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:38:04.660 [2024-12-09 23:19:45.231457] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:38:04.660 request: 00:38:04.660 { 00:38:04.660 "base_bdev": "BaseBdev1", 00:38:04.660 "raid_bdev": "raid_bdev1", 00:38:04.660 "method": "bdev_raid_add_base_bdev", 00:38:04.660 "req_id": 1 00:38:04.660 } 00:38:04.660 Got JSON-RPC error response 00:38:04.660 response: 00:38:04.660 { 00:38:04.660 "code": -22, 00:38:04.660 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:38:04.660 } 00:38:04.660 23:19:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:38:04.660 23:19:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:38:04.660 23:19:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:04.660 23:19:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:04.660 23:19:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:04.660 23:19:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:38:06.034 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:06.034 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:06.034 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:06.034 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:06.034 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:06.034 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:38:06.034 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:06.034 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:06.034 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:06.034 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:06.034 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:06.034 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:06.034 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.034 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:06.034 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.034 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:06.034 "name": "raid_bdev1", 00:38:06.034 "uuid": "be27a85f-889e-40ab-ae1f-004cd8aa2486", 00:38:06.034 "strip_size_kb": 0, 00:38:06.034 "state": "online", 00:38:06.034 "raid_level": "raid1", 00:38:06.034 "superblock": true, 00:38:06.034 "num_base_bdevs": 2, 00:38:06.034 "num_base_bdevs_discovered": 1, 00:38:06.034 "num_base_bdevs_operational": 1, 00:38:06.034 "base_bdevs_list": [ 00:38:06.034 { 00:38:06.034 "name": null, 00:38:06.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:06.034 "is_configured": false, 00:38:06.034 "data_offset": 0, 00:38:06.034 "data_size": 63488 00:38:06.034 }, 00:38:06.034 { 00:38:06.034 "name": "BaseBdev2", 00:38:06.034 "uuid": "595fac0d-b132-5c7a-9de3-a30294af0788", 00:38:06.034 "is_configured": true, 00:38:06.034 "data_offset": 2048, 00:38:06.034 "data_size": 63488 00:38:06.034 } 00:38:06.034 ] 00:38:06.034 }' 
00:38:06.034 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:06.034 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:06.034 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:06.034 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:06.034 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:06.034 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:06.034 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:06.034 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:06.034 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:06.034 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.034 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:06.034 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.292 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:06.293 "name": "raid_bdev1", 00:38:06.293 "uuid": "be27a85f-889e-40ab-ae1f-004cd8aa2486", 00:38:06.293 "strip_size_kb": 0, 00:38:06.293 "state": "online", 00:38:06.293 "raid_level": "raid1", 00:38:06.293 "superblock": true, 00:38:06.293 "num_base_bdevs": 2, 00:38:06.293 "num_base_bdevs_discovered": 1, 00:38:06.293 "num_base_bdevs_operational": 1, 00:38:06.293 "base_bdevs_list": [ 00:38:06.293 { 00:38:06.293 "name": null, 00:38:06.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:06.293 "is_configured": false, 00:38:06.293 "data_offset": 0, 
00:38:06.293 "data_size": 63488 00:38:06.293 }, 00:38:06.293 { 00:38:06.293 "name": "BaseBdev2", 00:38:06.293 "uuid": "595fac0d-b132-5c7a-9de3-a30294af0788", 00:38:06.293 "is_configured": true, 00:38:06.293 "data_offset": 2048, 00:38:06.293 "data_size": 63488 00:38:06.293 } 00:38:06.293 ] 00:38:06.293 }' 00:38:06.293 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:06.293 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:06.293 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:06.293 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:06.293 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76719 00:38:06.293 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 76719 ']' 00:38:06.293 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 76719 00:38:06.293 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:38:06.293 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:06.293 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76719 00:38:06.293 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:06.293 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:06.293 killing process with pid 76719 00:38:06.293 Received shutdown signal, test time was about 18.001561 seconds 00:38:06.293 00:38:06.293 Latency(us) 00:38:06.293 [2024-12-09T23:19:46.929Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:06.293 [2024-12-09T23:19:46.929Z] 
=================================================================================================================== 00:38:06.293 [2024-12-09T23:19:46.929Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:06.293 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76719' 00:38:06.293 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 76719 00:38:06.293 [2024-12-09 23:19:46.817158] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:06.293 23:19:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 76719 00:38:06.293 [2024-12-09 23:19:46.817293] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:06.293 [2024-12-09 23:19:46.817360] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:06.293 [2024-12-09 23:19:46.817372] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:38:06.550 [2024-12-09 23:19:47.049860] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:07.921 23:19:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:38:07.921 00:38:07.921 real 0m21.208s 00:38:07.921 user 0m27.333s 00:38:07.921 sys 0m2.459s 00:38:07.921 23:19:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:07.921 ************************************ 00:38:07.921 END TEST raid_rebuild_test_sb_io 00:38:07.921 ************************************ 00:38:07.921 23:19:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:07.921 23:19:48 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:38:07.921 23:19:48 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:38:07.921 23:19:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 
7 -le 1 ']' 00:38:07.921 23:19:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:07.921 23:19:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:38:07.921 ************************************ 00:38:07.921 START TEST raid_rebuild_test 00:38:07.921 ************************************ 00:38:07.921 23:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:38:07.921 23:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:38:07.922 23:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:38:07.922 23:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:38:07.922 23:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:38:07.922 23:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:38:07.922 23:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:38:07.922 23:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:38:07.922 23:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:38:07.922 23:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:38:07.922 23:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:38:07.922 23:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:38:07.922 23:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:38:07.922 23:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:38:07.922 23:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:38:07.922 23:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:38:07.922 23:19:48 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:38:07.922 23:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:38:07.922 23:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:38:07.922 23:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:38:07.922 23:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:38:07.922 23:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:38:07.922 23:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:38:07.922 23:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:38:07.922 23:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:38:07.922 23:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:38:07.922 23:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:38:07.922 23:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:38:07.922 23:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:38:07.922 23:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:38:07.922 23:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77429 00:38:07.922 23:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77429 00:38:07.922 23:19:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:38:07.922 23:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77429 ']' 00:38:07.922 23:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:38:07.922 23:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:07.922 23:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:07.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:07.922 23:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:07.922 23:19:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:07.922 [2024-12-09 23:19:48.435176] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:38:07.922 I/O size of 3145728 is greater than zero copy threshold (65536). 00:38:07.922 Zero copy mechanism will not be used. 00:38:07.922 [2024-12-09 23:19:48.435493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77429 ] 00:38:08.180 [2024-12-09 23:19:48.616950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:08.180 [2024-12-09 23:19:48.734424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:08.438 [2024-12-09 23:19:48.948160] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:08.438 [2024-12-09 23:19:48.948234] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:08.696 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:08.696 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:38:08.696 23:19:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:38:08.696 23:19:49 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:38:08.696 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.696 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:08.696 BaseBdev1_malloc 00:38:08.696 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.696 23:19:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:38:08.696 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.696 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:08.696 [2024-12-09 23:19:49.328088] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:38:08.696 [2024-12-09 23:19:49.328158] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:08.696 [2024-12-09 23:19:49.328186] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:38:08.696 [2024-12-09 23:19:49.328201] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:08.697 [2024-12-09 23:19:49.330615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:08.955 [2024-12-09 23:19:49.330808] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:38:08.955 BaseBdev1 00:38:08.955 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.955 23:19:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:38:08.955 23:19:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:38:08.955 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.955 23:19:49 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:08.955 BaseBdev2_malloc 00:38:08.955 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.955 23:19:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:38:08.955 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.955 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:08.955 [2024-12-09 23:19:49.384257] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:38:08.955 [2024-12-09 23:19:49.384326] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:08.955 [2024-12-09 23:19:49.384349] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:38:08.955 [2024-12-09 23:19:49.384364] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:08.955 [2024-12-09 23:19:49.386743] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:08.955 [2024-12-09 23:19:49.386787] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:38:08.955 BaseBdev2 00:38:08.955 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.955 23:19:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:38:08.955 23:19:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:38:08.955 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.955 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:08.955 BaseBdev3_malloc 00:38:08.955 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.955 23:19:49 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:38:08.955 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.955 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:08.955 [2024-12-09 23:19:49.452505] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:38:08.955 [2024-12-09 23:19:49.452570] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:08.955 [2024-12-09 23:19:49.452598] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:38:08.955 [2024-12-09 23:19:49.452612] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:08.955 [2024-12-09 23:19:49.455075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:08.955 [2024-12-09 23:19:49.455258] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:38:08.955 BaseBdev3 00:38:08.955 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.955 23:19:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:38:08.955 23:19:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:38:08.955 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.955 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:08.955 BaseBdev4_malloc 00:38:08.955 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.955 23:19:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:38:08.955 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:38:08.955 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:08.955 [2024-12-09 23:19:49.511343] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:38:08.955 [2024-12-09 23:19:49.511428] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:08.955 [2024-12-09 23:19:49.511454] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:38:08.955 [2024-12-09 23:19:49.511469] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:08.955 [2024-12-09 23:19:49.513960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:08.955 [2024-12-09 23:19:49.514133] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:38:08.955 BaseBdev4 00:38:08.955 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.955 23:19:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:38:08.955 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.955 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:08.955 spare_malloc 00:38:08.955 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.955 23:19:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:38:08.955 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.955 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:08.955 spare_delay 00:38:08.955 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.955 23:19:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd 
bdev_passthru_create -b spare_delay -p spare 00:38:08.956 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.956 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:08.956 [2024-12-09 23:19:49.580550] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:08.956 [2024-12-09 23:19:49.580746] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:08.956 [2024-12-09 23:19:49.580796] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:38:08.956 [2024-12-09 23:19:49.580813] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:08.956 [2024-12-09 23:19:49.583455] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:08.956 [2024-12-09 23:19:49.583498] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:08.956 spare 00:38:08.956 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.956 23:19:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:38:08.956 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.956 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:09.213 [2024-12-09 23:19:49.592582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:09.213 [2024-12-09 23:19:49.594661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:09.213 [2024-12-09 23:19:49.594855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:38:09.213 [2024-12-09 23:19:49.594922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:38:09.213 [2024-12-09 
23:19:49.595013] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:38:09.213 [2024-12-09 23:19:49.595029] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:38:09.213 [2024-12-09 23:19:49.595304] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:38:09.213 [2024-12-09 23:19:49.595487] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:38:09.213 [2024-12-09 23:19:49.595503] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:38:09.213 [2024-12-09 23:19:49.595658] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:09.213 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.213 23:19:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:38:09.213 23:19:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:09.213 23:19:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:09.213 23:19:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:09.213 23:19:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:09.213 23:19:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:09.213 23:19:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:09.213 23:19:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:09.213 23:19:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:09.213 23:19:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:09.213 23:19:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 
-- # rpc_cmd bdev_raid_get_bdevs all 00:38:09.213 23:19:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:09.213 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.213 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:09.213 23:19:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.213 23:19:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:09.213 "name": "raid_bdev1", 00:38:09.213 "uuid": "70b758d1-1106-4687-a51e-c78378c1e2ca", 00:38:09.213 "strip_size_kb": 0, 00:38:09.213 "state": "online", 00:38:09.213 "raid_level": "raid1", 00:38:09.213 "superblock": false, 00:38:09.213 "num_base_bdevs": 4, 00:38:09.213 "num_base_bdevs_discovered": 4, 00:38:09.213 "num_base_bdevs_operational": 4, 00:38:09.213 "base_bdevs_list": [ 00:38:09.213 { 00:38:09.213 "name": "BaseBdev1", 00:38:09.213 "uuid": "b8154d47-5a73-5275-8a7d-f32044db0479", 00:38:09.213 "is_configured": true, 00:38:09.213 "data_offset": 0, 00:38:09.213 "data_size": 65536 00:38:09.213 }, 00:38:09.213 { 00:38:09.213 "name": "BaseBdev2", 00:38:09.213 "uuid": "5c5bdc09-11de-553f-b505-edbe8d23dca1", 00:38:09.213 "is_configured": true, 00:38:09.213 "data_offset": 0, 00:38:09.213 "data_size": 65536 00:38:09.213 }, 00:38:09.213 { 00:38:09.213 "name": "BaseBdev3", 00:38:09.213 "uuid": "8d0adb73-0fb2-52a1-9ef3-4295827c8ee0", 00:38:09.213 "is_configured": true, 00:38:09.213 "data_offset": 0, 00:38:09.213 "data_size": 65536 00:38:09.213 }, 00:38:09.213 { 00:38:09.213 "name": "BaseBdev4", 00:38:09.213 "uuid": "8aa8ecf1-6455-5b09-a2fc-7e356f280395", 00:38:09.213 "is_configured": true, 00:38:09.213 "data_offset": 0, 00:38:09.213 "data_size": 65536 00:38:09.213 } 00:38:09.213 ] 00:38:09.213 }' 00:38:09.213 23:19:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:09.213 23:19:49 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:09.471 23:19:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:38:09.471 23:19:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:38:09.471 23:19:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.471 23:19:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:09.471 [2024-12-09 23:19:50.056435] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:09.471 23:19:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.471 23:19:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:38:09.471 23:19:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:09.471 23:19:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:38:09.471 23:19:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.471 23:19:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:09.729 23:19:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.729 23:19:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:38:09.729 23:19:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:38:09.729 23:19:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:38:09.729 23:19:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:38:09.729 23:19:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:38:09.729 23:19:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 
00:38:09.729 23:19:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:38:09.729 23:19:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:38:09.729 23:19:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:38:09.729 23:19:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:38:09.729 23:19:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:38:09.729 23:19:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:38:09.729 23:19:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:38:09.729 23:19:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:38:09.729 [2024-12-09 23:19:50.351708] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:38:09.986 /dev/nbd0 00:38:09.986 23:19:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:38:09.986 23:19:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:38:09.986 23:19:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:38:09.986 23:19:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:38:09.986 23:19:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:38:09.986 23:19:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:38:09.986 23:19:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:38:09.987 23:19:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:38:09.987 23:19:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:38:09.987 23:19:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 
-- # (( i <= 20 )) 00:38:09.987 23:19:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:09.987 1+0 records in 00:38:09.987 1+0 records out 00:38:09.987 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381136 s, 10.7 MB/s 00:38:09.987 23:19:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:09.987 23:19:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:38:09.987 23:19:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:09.987 23:19:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:38:09.987 23:19:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:38:09.987 23:19:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:09.987 23:19:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:38:09.987 23:19:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:38:09.987 23:19:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:38:09.987 23:19:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:38:16.544 65536+0 records in 00:38:16.544 65536+0 records out 00:38:16.545 33554432 bytes (34 MB, 32 MiB) copied, 6.67824 s, 5.0 MB/s 00:38:16.545 23:19:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:38:16.545 23:19:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:38:16.545 23:19:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:38:16.545 23:19:57 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:38:16.545 23:19:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:38:16.545 23:19:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:16.545 23:19:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:38:16.803 [2024-12-09 23:19:57.301449] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:16.803 23:19:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:38:16.803 23:19:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:16.803 23:19:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:16.803 23:19:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:16.803 23:19:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:16.803 23:19:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:16.803 23:19:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:38:16.803 23:19:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:38:16.803 23:19:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:38:16.803 23:19:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.803 23:19:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:16.804 [2024-12-09 23:19:57.337489] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:38:16.804 23:19:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.804 23:19:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:38:16.804 23:19:57 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:16.804 23:19:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:16.804 23:19:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:16.804 23:19:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:16.804 23:19:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:38:16.804 23:19:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:16.804 23:19:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:16.804 23:19:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:16.804 23:19:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:16.804 23:19:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:16.804 23:19:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:16.804 23:19:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.804 23:19:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:16.804 23:19:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.804 23:19:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:16.804 "name": "raid_bdev1", 00:38:16.804 "uuid": "70b758d1-1106-4687-a51e-c78378c1e2ca", 00:38:16.804 "strip_size_kb": 0, 00:38:16.804 "state": "online", 00:38:16.804 "raid_level": "raid1", 00:38:16.804 "superblock": false, 00:38:16.804 "num_base_bdevs": 4, 00:38:16.804 "num_base_bdevs_discovered": 3, 00:38:16.804 "num_base_bdevs_operational": 3, 00:38:16.804 "base_bdevs_list": [ 00:38:16.804 { 00:38:16.804 "name": null, 00:38:16.804 
"uuid": "00000000-0000-0000-0000-000000000000", 00:38:16.804 "is_configured": false, 00:38:16.804 "data_offset": 0, 00:38:16.804 "data_size": 65536 00:38:16.804 }, 00:38:16.804 { 00:38:16.804 "name": "BaseBdev2", 00:38:16.804 "uuid": "5c5bdc09-11de-553f-b505-edbe8d23dca1", 00:38:16.804 "is_configured": true, 00:38:16.804 "data_offset": 0, 00:38:16.804 "data_size": 65536 00:38:16.804 }, 00:38:16.804 { 00:38:16.804 "name": "BaseBdev3", 00:38:16.804 "uuid": "8d0adb73-0fb2-52a1-9ef3-4295827c8ee0", 00:38:16.804 "is_configured": true, 00:38:16.804 "data_offset": 0, 00:38:16.804 "data_size": 65536 00:38:16.804 }, 00:38:16.804 { 00:38:16.804 "name": "BaseBdev4", 00:38:16.804 "uuid": "8aa8ecf1-6455-5b09-a2fc-7e356f280395", 00:38:16.804 "is_configured": true, 00:38:16.804 "data_offset": 0, 00:38:16.804 "data_size": 65536 00:38:16.804 } 00:38:16.804 ] 00:38:16.804 }' 00:38:16.804 23:19:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:16.804 23:19:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:17.372 23:19:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:38:17.372 23:19:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.372 23:19:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:17.372 [2024-12-09 23:19:57.712942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:17.372 [2024-12-09 23:19:57.730410] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:38:17.372 23:19:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.372 23:19:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:38:17.372 [2024-12-09 23:19:57.732579] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:18.306 23:19:58 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:18.306 23:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:18.306 23:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:18.306 23:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:18.306 23:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:18.306 23:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:18.306 23:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:18.306 23:19:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:18.306 23:19:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:18.306 23:19:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:18.306 23:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:18.306 "name": "raid_bdev1", 00:38:18.306 "uuid": "70b758d1-1106-4687-a51e-c78378c1e2ca", 00:38:18.306 "strip_size_kb": 0, 00:38:18.306 "state": "online", 00:38:18.306 "raid_level": "raid1", 00:38:18.306 "superblock": false, 00:38:18.306 "num_base_bdevs": 4, 00:38:18.306 "num_base_bdevs_discovered": 4, 00:38:18.306 "num_base_bdevs_operational": 4, 00:38:18.306 "process": { 00:38:18.306 "type": "rebuild", 00:38:18.306 "target": "spare", 00:38:18.306 "progress": { 00:38:18.306 "blocks": 20480, 00:38:18.306 "percent": 31 00:38:18.306 } 00:38:18.306 }, 00:38:18.306 "base_bdevs_list": [ 00:38:18.306 { 00:38:18.306 "name": "spare", 00:38:18.306 "uuid": "18515c47-d8f9-5e94-9a63-d58d5704cb48", 00:38:18.306 "is_configured": true, 00:38:18.306 "data_offset": 0, 00:38:18.306 "data_size": 65536 00:38:18.306 }, 00:38:18.306 { 
00:38:18.306 "name": "BaseBdev2", 00:38:18.306 "uuid": "5c5bdc09-11de-553f-b505-edbe8d23dca1", 00:38:18.306 "is_configured": true, 00:38:18.306 "data_offset": 0, 00:38:18.306 "data_size": 65536 00:38:18.306 }, 00:38:18.306 { 00:38:18.306 "name": "BaseBdev3", 00:38:18.306 "uuid": "8d0adb73-0fb2-52a1-9ef3-4295827c8ee0", 00:38:18.306 "is_configured": true, 00:38:18.306 "data_offset": 0, 00:38:18.306 "data_size": 65536 00:38:18.306 }, 00:38:18.306 { 00:38:18.306 "name": "BaseBdev4", 00:38:18.306 "uuid": "8aa8ecf1-6455-5b09-a2fc-7e356f280395", 00:38:18.306 "is_configured": true, 00:38:18.306 "data_offset": 0, 00:38:18.306 "data_size": 65536 00:38:18.306 } 00:38:18.306 ] 00:38:18.306 }' 00:38:18.306 23:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:18.306 23:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:18.306 23:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:18.306 23:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:18.306 23:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:38:18.306 23:19:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:18.306 23:19:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:18.306 [2024-12-09 23:19:58.860149] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:18.306 [2024-12-09 23:19:58.938309] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:18.306 [2024-12-09 23:19:58.938709] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:18.306 [2024-12-09 23:19:58.938832] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:18.306 [2024-12-09 23:19:58.938882] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:18.564 23:19:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:18.564 23:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:38:18.564 23:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:18.564 23:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:18.564 23:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:18.564 23:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:18.564 23:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:38:18.564 23:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:18.564 23:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:18.564 23:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:18.564 23:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:18.564 23:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:18.564 23:19:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:18.564 23:19:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:18.564 23:19:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:18.564 23:19:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:18.564 23:19:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:18.564 "name": "raid_bdev1", 00:38:18.564 "uuid": 
"70b758d1-1106-4687-a51e-c78378c1e2ca", 00:38:18.564 "strip_size_kb": 0, 00:38:18.564 "state": "online", 00:38:18.564 "raid_level": "raid1", 00:38:18.564 "superblock": false, 00:38:18.564 "num_base_bdevs": 4, 00:38:18.564 "num_base_bdevs_discovered": 3, 00:38:18.564 "num_base_bdevs_operational": 3, 00:38:18.564 "base_bdevs_list": [ 00:38:18.564 { 00:38:18.564 "name": null, 00:38:18.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:18.564 "is_configured": false, 00:38:18.564 "data_offset": 0, 00:38:18.564 "data_size": 65536 00:38:18.564 }, 00:38:18.564 { 00:38:18.564 "name": "BaseBdev2", 00:38:18.564 "uuid": "5c5bdc09-11de-553f-b505-edbe8d23dca1", 00:38:18.564 "is_configured": true, 00:38:18.564 "data_offset": 0, 00:38:18.564 "data_size": 65536 00:38:18.564 }, 00:38:18.564 { 00:38:18.564 "name": "BaseBdev3", 00:38:18.564 "uuid": "8d0adb73-0fb2-52a1-9ef3-4295827c8ee0", 00:38:18.564 "is_configured": true, 00:38:18.564 "data_offset": 0, 00:38:18.564 "data_size": 65536 00:38:18.564 }, 00:38:18.564 { 00:38:18.564 "name": "BaseBdev4", 00:38:18.564 "uuid": "8aa8ecf1-6455-5b09-a2fc-7e356f280395", 00:38:18.564 "is_configured": true, 00:38:18.564 "data_offset": 0, 00:38:18.564 "data_size": 65536 00:38:18.564 } 00:38:18.564 ] 00:38:18.564 }' 00:38:18.564 23:19:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:18.564 23:19:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:18.822 23:19:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:18.822 23:19:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:18.822 23:19:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:18.822 23:19:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:18.822 23:19:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:38:18.822 23:19:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:18.822 23:19:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:18.822 23:19:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:18.822 23:19:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:18.822 23:19:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:18.822 23:19:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:18.822 "name": "raid_bdev1", 00:38:18.822 "uuid": "70b758d1-1106-4687-a51e-c78378c1e2ca", 00:38:18.822 "strip_size_kb": 0, 00:38:18.822 "state": "online", 00:38:18.822 "raid_level": "raid1", 00:38:18.822 "superblock": false, 00:38:18.822 "num_base_bdevs": 4, 00:38:18.822 "num_base_bdevs_discovered": 3, 00:38:18.822 "num_base_bdevs_operational": 3, 00:38:18.822 "base_bdevs_list": [ 00:38:18.822 { 00:38:18.822 "name": null, 00:38:18.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:18.822 "is_configured": false, 00:38:18.822 "data_offset": 0, 00:38:18.822 "data_size": 65536 00:38:18.822 }, 00:38:18.822 { 00:38:18.822 "name": "BaseBdev2", 00:38:18.822 "uuid": "5c5bdc09-11de-553f-b505-edbe8d23dca1", 00:38:18.822 "is_configured": true, 00:38:18.822 "data_offset": 0, 00:38:18.822 "data_size": 65536 00:38:18.822 }, 00:38:18.822 { 00:38:18.822 "name": "BaseBdev3", 00:38:18.822 "uuid": "8d0adb73-0fb2-52a1-9ef3-4295827c8ee0", 00:38:18.822 "is_configured": true, 00:38:18.822 "data_offset": 0, 00:38:18.822 "data_size": 65536 00:38:18.822 }, 00:38:18.822 { 00:38:18.822 "name": "BaseBdev4", 00:38:18.822 "uuid": "8aa8ecf1-6455-5b09-a2fc-7e356f280395", 00:38:18.822 "is_configured": true, 00:38:18.822 "data_offset": 0, 00:38:18.822 "data_size": 65536 00:38:18.822 } 00:38:18.822 ] 00:38:18.822 }' 00:38:18.822 23:19:59 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:19.080 23:19:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:19.080 23:19:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:19.080 23:19:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:19.080 23:19:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:38:19.080 23:19:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:19.080 23:19:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:19.080 [2024-12-09 23:19:59.541546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:19.080 [2024-12-09 23:19:59.556346] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:38:19.080 23:19:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.080 23:19:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:38:19.080 [2024-12-09 23:19:59.558534] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:20.016 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:20.016 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:20.016 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:20.016 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:20.016 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:20.016 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:20.016 23:20:00 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.016 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:20.016 23:20:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:20.016 23:20:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.016 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:20.016 "name": "raid_bdev1", 00:38:20.016 "uuid": "70b758d1-1106-4687-a51e-c78378c1e2ca", 00:38:20.016 "strip_size_kb": 0, 00:38:20.016 "state": "online", 00:38:20.016 "raid_level": "raid1", 00:38:20.016 "superblock": false, 00:38:20.016 "num_base_bdevs": 4, 00:38:20.016 "num_base_bdevs_discovered": 4, 00:38:20.016 "num_base_bdevs_operational": 4, 00:38:20.016 "process": { 00:38:20.016 "type": "rebuild", 00:38:20.016 "target": "spare", 00:38:20.016 "progress": { 00:38:20.016 "blocks": 20480, 00:38:20.016 "percent": 31 00:38:20.016 } 00:38:20.016 }, 00:38:20.016 "base_bdevs_list": [ 00:38:20.016 { 00:38:20.016 "name": "spare", 00:38:20.016 "uuid": "18515c47-d8f9-5e94-9a63-d58d5704cb48", 00:38:20.016 "is_configured": true, 00:38:20.016 "data_offset": 0, 00:38:20.016 "data_size": 65536 00:38:20.016 }, 00:38:20.016 { 00:38:20.016 "name": "BaseBdev2", 00:38:20.016 "uuid": "5c5bdc09-11de-553f-b505-edbe8d23dca1", 00:38:20.016 "is_configured": true, 00:38:20.016 "data_offset": 0, 00:38:20.016 "data_size": 65536 00:38:20.016 }, 00:38:20.016 { 00:38:20.016 "name": "BaseBdev3", 00:38:20.016 "uuid": "8d0adb73-0fb2-52a1-9ef3-4295827c8ee0", 00:38:20.016 "is_configured": true, 00:38:20.016 "data_offset": 0, 00:38:20.016 "data_size": 65536 00:38:20.016 }, 00:38:20.016 { 00:38:20.016 "name": "BaseBdev4", 00:38:20.016 "uuid": "8aa8ecf1-6455-5b09-a2fc-7e356f280395", 00:38:20.016 "is_configured": true, 00:38:20.016 "data_offset": 0, 00:38:20.016 "data_size": 65536 00:38:20.016 } 00:38:20.016 ] 00:38:20.016 }' 
00:38:20.016 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:20.275 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:20.275 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:20.275 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:20.275 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:38:20.275 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:38:20.275 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:38:20.275 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:38:20.275 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:38:20.275 23:20:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.275 23:20:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:20.275 [2024-12-09 23:20:00.694587] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:38:20.275 [2024-12-09 23:20:00.764179] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:38:20.275 23:20:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.275 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:38:20.275 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:38:20.275 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:20.275 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:20.275 23:20:00 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:20.275 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:20.275 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:20.275 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:20.275 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:20.275 23:20:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.275 23:20:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:20.275 23:20:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.275 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:20.275 "name": "raid_bdev1", 00:38:20.275 "uuid": "70b758d1-1106-4687-a51e-c78378c1e2ca", 00:38:20.275 "strip_size_kb": 0, 00:38:20.275 "state": "online", 00:38:20.275 "raid_level": "raid1", 00:38:20.275 "superblock": false, 00:38:20.275 "num_base_bdevs": 4, 00:38:20.275 "num_base_bdevs_discovered": 3, 00:38:20.275 "num_base_bdevs_operational": 3, 00:38:20.275 "process": { 00:38:20.275 "type": "rebuild", 00:38:20.275 "target": "spare", 00:38:20.275 "progress": { 00:38:20.275 "blocks": 24576, 00:38:20.275 "percent": 37 00:38:20.275 } 00:38:20.275 }, 00:38:20.275 "base_bdevs_list": [ 00:38:20.275 { 00:38:20.275 "name": "spare", 00:38:20.275 "uuid": "18515c47-d8f9-5e94-9a63-d58d5704cb48", 00:38:20.275 "is_configured": true, 00:38:20.275 "data_offset": 0, 00:38:20.275 "data_size": 65536 00:38:20.275 }, 00:38:20.275 { 00:38:20.275 "name": null, 00:38:20.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:20.275 "is_configured": false, 00:38:20.275 "data_offset": 0, 00:38:20.275 "data_size": 65536 00:38:20.275 }, 00:38:20.275 { 00:38:20.275 "name": 
"BaseBdev3", 00:38:20.275 "uuid": "8d0adb73-0fb2-52a1-9ef3-4295827c8ee0", 00:38:20.275 "is_configured": true, 00:38:20.275 "data_offset": 0, 00:38:20.275 "data_size": 65536 00:38:20.275 }, 00:38:20.275 { 00:38:20.275 "name": "BaseBdev4", 00:38:20.275 "uuid": "8aa8ecf1-6455-5b09-a2fc-7e356f280395", 00:38:20.275 "is_configured": true, 00:38:20.275 "data_offset": 0, 00:38:20.275 "data_size": 65536 00:38:20.275 } 00:38:20.275 ] 00:38:20.275 }' 00:38:20.275 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:20.275 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:20.275 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:20.275 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:20.275 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=450 00:38:20.275 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:20.275 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:20.275 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:20.275 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:20.275 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:20.275 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:20.533 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:20.533 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:20.533 23:20:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.533 23:20:00 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:20.533 23:20:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.533 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:20.533 "name": "raid_bdev1", 00:38:20.533 "uuid": "70b758d1-1106-4687-a51e-c78378c1e2ca", 00:38:20.533 "strip_size_kb": 0, 00:38:20.533 "state": "online", 00:38:20.533 "raid_level": "raid1", 00:38:20.534 "superblock": false, 00:38:20.534 "num_base_bdevs": 4, 00:38:20.534 "num_base_bdevs_discovered": 3, 00:38:20.534 "num_base_bdevs_operational": 3, 00:38:20.534 "process": { 00:38:20.534 "type": "rebuild", 00:38:20.534 "target": "spare", 00:38:20.534 "progress": { 00:38:20.534 "blocks": 26624, 00:38:20.534 "percent": 40 00:38:20.534 } 00:38:20.534 }, 00:38:20.534 "base_bdevs_list": [ 00:38:20.534 { 00:38:20.534 "name": "spare", 00:38:20.534 "uuid": "18515c47-d8f9-5e94-9a63-d58d5704cb48", 00:38:20.534 "is_configured": true, 00:38:20.534 "data_offset": 0, 00:38:20.534 "data_size": 65536 00:38:20.534 }, 00:38:20.534 { 00:38:20.534 "name": null, 00:38:20.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:20.534 "is_configured": false, 00:38:20.534 "data_offset": 0, 00:38:20.534 "data_size": 65536 00:38:20.534 }, 00:38:20.534 { 00:38:20.534 "name": "BaseBdev3", 00:38:20.534 "uuid": "8d0adb73-0fb2-52a1-9ef3-4295827c8ee0", 00:38:20.534 "is_configured": true, 00:38:20.534 "data_offset": 0, 00:38:20.534 "data_size": 65536 00:38:20.534 }, 00:38:20.534 { 00:38:20.534 "name": "BaseBdev4", 00:38:20.534 "uuid": "8aa8ecf1-6455-5b09-a2fc-7e356f280395", 00:38:20.534 "is_configured": true, 00:38:20.534 "data_offset": 0, 00:38:20.534 "data_size": 65536 00:38:20.534 } 00:38:20.534 ] 00:38:20.534 }' 00:38:20.534 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:20.534 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:38:20.534 23:20:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:20.534 23:20:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:20.534 23:20:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:38:21.499 23:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:21.499 23:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:21.499 23:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:21.499 23:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:21.499 23:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:21.499 23:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:21.499 23:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:21.499 23:20:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:21.499 23:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:21.499 23:20:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:21.499 23:20:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:21.499 23:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:21.499 "name": "raid_bdev1", 00:38:21.499 "uuid": "70b758d1-1106-4687-a51e-c78378c1e2ca", 00:38:21.499 "strip_size_kb": 0, 00:38:21.499 "state": "online", 00:38:21.499 "raid_level": "raid1", 00:38:21.499 "superblock": false, 00:38:21.499 "num_base_bdevs": 4, 00:38:21.499 "num_base_bdevs_discovered": 3, 00:38:21.499 "num_base_bdevs_operational": 3, 00:38:21.499 "process": { 
00:38:21.499 "type": "rebuild", 00:38:21.499 "target": "spare", 00:38:21.499 "progress": { 00:38:21.499 "blocks": 49152, 00:38:21.499 "percent": 75 00:38:21.499 } 00:38:21.499 }, 00:38:21.499 "base_bdevs_list": [ 00:38:21.499 { 00:38:21.499 "name": "spare", 00:38:21.499 "uuid": "18515c47-d8f9-5e94-9a63-d58d5704cb48", 00:38:21.499 "is_configured": true, 00:38:21.499 "data_offset": 0, 00:38:21.499 "data_size": 65536 00:38:21.499 }, 00:38:21.499 { 00:38:21.499 "name": null, 00:38:21.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:21.499 "is_configured": false, 00:38:21.499 "data_offset": 0, 00:38:21.499 "data_size": 65536 00:38:21.499 }, 00:38:21.499 { 00:38:21.499 "name": "BaseBdev3", 00:38:21.499 "uuid": "8d0adb73-0fb2-52a1-9ef3-4295827c8ee0", 00:38:21.499 "is_configured": true, 00:38:21.499 "data_offset": 0, 00:38:21.499 "data_size": 65536 00:38:21.499 }, 00:38:21.499 { 00:38:21.499 "name": "BaseBdev4", 00:38:21.499 "uuid": "8aa8ecf1-6455-5b09-a2fc-7e356f280395", 00:38:21.499 "is_configured": true, 00:38:21.499 "data_offset": 0, 00:38:21.499 "data_size": 65536 00:38:21.499 } 00:38:21.499 ] 00:38:21.499 }' 00:38:21.499 23:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:21.499 23:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:21.499 23:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:21.762 23:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:21.762 23:20:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:38:22.332 [2024-12-09 23:20:02.773596] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:38:22.332 [2024-12-09 23:20:02.773892] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:38:22.332 [2024-12-09 23:20:02.773959] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:22.591 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:22.591 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:22.591 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:22.591 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:22.591 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:22.591 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:22.591 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:22.591 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:22.591 23:20:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:22.591 23:20:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:22.591 23:20:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:22.851 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:22.851 "name": "raid_bdev1", 00:38:22.851 "uuid": "70b758d1-1106-4687-a51e-c78378c1e2ca", 00:38:22.851 "strip_size_kb": 0, 00:38:22.851 "state": "online", 00:38:22.851 "raid_level": "raid1", 00:38:22.851 "superblock": false, 00:38:22.851 "num_base_bdevs": 4, 00:38:22.851 "num_base_bdevs_discovered": 3, 00:38:22.851 "num_base_bdevs_operational": 3, 00:38:22.851 "base_bdevs_list": [ 00:38:22.851 { 00:38:22.851 "name": "spare", 00:38:22.851 "uuid": "18515c47-d8f9-5e94-9a63-d58d5704cb48", 00:38:22.851 "is_configured": true, 00:38:22.851 "data_offset": 0, 00:38:22.851 "data_size": 65536 00:38:22.851 }, 00:38:22.851 { 00:38:22.851 "name": null, 
00:38:22.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:22.851 "is_configured": false, 00:38:22.851 "data_offset": 0, 00:38:22.851 "data_size": 65536 00:38:22.851 }, 00:38:22.851 { 00:38:22.851 "name": "BaseBdev3", 00:38:22.851 "uuid": "8d0adb73-0fb2-52a1-9ef3-4295827c8ee0", 00:38:22.851 "is_configured": true, 00:38:22.851 "data_offset": 0, 00:38:22.851 "data_size": 65536 00:38:22.851 }, 00:38:22.851 { 00:38:22.851 "name": "BaseBdev4", 00:38:22.851 "uuid": "8aa8ecf1-6455-5b09-a2fc-7e356f280395", 00:38:22.851 "is_configured": true, 00:38:22.851 "data_offset": 0, 00:38:22.851 "data_size": 65536 00:38:22.851 } 00:38:22.851 ] 00:38:22.851 }' 00:38:22.851 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:22.851 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:38:22.851 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:22.851 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:38:22.851 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:38:22.851 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:22.851 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:22.851 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:22.851 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:22.851 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:22.851 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:22.851 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:22.851 23:20:03 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:22.851 23:20:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:22.851 23:20:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:22.851 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:22.851 "name": "raid_bdev1", 00:38:22.851 "uuid": "70b758d1-1106-4687-a51e-c78378c1e2ca", 00:38:22.851 "strip_size_kb": 0, 00:38:22.851 "state": "online", 00:38:22.851 "raid_level": "raid1", 00:38:22.851 "superblock": false, 00:38:22.851 "num_base_bdevs": 4, 00:38:22.851 "num_base_bdevs_discovered": 3, 00:38:22.851 "num_base_bdevs_operational": 3, 00:38:22.851 "base_bdevs_list": [ 00:38:22.851 { 00:38:22.851 "name": "spare", 00:38:22.851 "uuid": "18515c47-d8f9-5e94-9a63-d58d5704cb48", 00:38:22.851 "is_configured": true, 00:38:22.851 "data_offset": 0, 00:38:22.851 "data_size": 65536 00:38:22.851 }, 00:38:22.851 { 00:38:22.851 "name": null, 00:38:22.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:22.851 "is_configured": false, 00:38:22.851 "data_offset": 0, 00:38:22.851 "data_size": 65536 00:38:22.851 }, 00:38:22.851 { 00:38:22.851 "name": "BaseBdev3", 00:38:22.851 "uuid": "8d0adb73-0fb2-52a1-9ef3-4295827c8ee0", 00:38:22.851 "is_configured": true, 00:38:22.851 "data_offset": 0, 00:38:22.851 "data_size": 65536 00:38:22.851 }, 00:38:22.851 { 00:38:22.851 "name": "BaseBdev4", 00:38:22.851 "uuid": "8aa8ecf1-6455-5b09-a2fc-7e356f280395", 00:38:22.851 "is_configured": true, 00:38:22.851 "data_offset": 0, 00:38:22.851 "data_size": 65536 00:38:22.851 } 00:38:22.851 ] 00:38:22.851 }' 00:38:22.851 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:22.851 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:22.851 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:38:22.851 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:22.851 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:38:22.851 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:22.851 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:22.851 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:22.851 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:22.851 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:38:22.851 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:22.851 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:22.851 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:22.852 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:22.852 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:22.852 23:20:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:22.852 23:20:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:22.852 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:22.852 23:20:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:22.852 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:22.852 "name": "raid_bdev1", 00:38:22.852 "uuid": "70b758d1-1106-4687-a51e-c78378c1e2ca", 00:38:22.852 "strip_size_kb": 0, 00:38:22.852 "state": "online", 
00:38:22.852 "raid_level": "raid1", 00:38:22.852 "superblock": false, 00:38:22.852 "num_base_bdevs": 4, 00:38:22.852 "num_base_bdevs_discovered": 3, 00:38:22.852 "num_base_bdevs_operational": 3, 00:38:22.852 "base_bdevs_list": [ 00:38:22.852 { 00:38:22.852 "name": "spare", 00:38:22.852 "uuid": "18515c47-d8f9-5e94-9a63-d58d5704cb48", 00:38:22.852 "is_configured": true, 00:38:22.852 "data_offset": 0, 00:38:22.852 "data_size": 65536 00:38:22.852 }, 00:38:22.852 { 00:38:22.852 "name": null, 00:38:22.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:22.852 "is_configured": false, 00:38:22.852 "data_offset": 0, 00:38:22.852 "data_size": 65536 00:38:22.852 }, 00:38:22.852 { 00:38:22.852 "name": "BaseBdev3", 00:38:22.852 "uuid": "8d0adb73-0fb2-52a1-9ef3-4295827c8ee0", 00:38:22.852 "is_configured": true, 00:38:22.852 "data_offset": 0, 00:38:22.852 "data_size": 65536 00:38:22.852 }, 00:38:22.852 { 00:38:22.852 "name": "BaseBdev4", 00:38:22.852 "uuid": "8aa8ecf1-6455-5b09-a2fc-7e356f280395", 00:38:22.852 "is_configured": true, 00:38:22.852 "data_offset": 0, 00:38:22.852 "data_size": 65536 00:38:22.852 } 00:38:22.852 ] 00:38:22.852 }' 00:38:22.852 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:22.852 23:20:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:23.419 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:38:23.419 23:20:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.419 23:20:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:23.419 [2024-12-09 23:20:03.871163] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:23.419 [2024-12-09 23:20:03.871203] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:23.419 [2024-12-09 23:20:03.871289] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:38:23.419 [2024-12-09 23:20:03.871372] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:23.419 [2024-12-09 23:20:03.871384] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:38:23.419 23:20:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.419 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:23.419 23:20:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.419 23:20:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:23.419 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:38:23.419 23:20:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.419 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:38:23.419 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:38:23.419 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:38:23.419 23:20:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:38:23.419 23:20:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:38:23.419 23:20:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:38:23.419 23:20:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:38:23.419 23:20:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:38:23.419 23:20:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:38:23.419 23:20:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local 
i 00:38:23.419 23:20:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:38:23.419 23:20:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:23.419 23:20:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:38:23.677 /dev/nbd0 00:38:23.677 23:20:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:38:23.677 23:20:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:38:23.677 23:20:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:38:23.677 23:20:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:38:23.677 23:20:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:38:23.677 23:20:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:38:23.677 23:20:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:38:23.677 23:20:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:38:23.677 23:20:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:38:23.677 23:20:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:38:23.677 23:20:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:23.677 1+0 records in 00:38:23.677 1+0 records out 00:38:23.677 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294533 s, 13.9 MB/s 00:38:23.677 23:20:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:23.677 23:20:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:38:23.677 23:20:04 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:23.677 23:20:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:38:23.677 23:20:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:38:23.677 23:20:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:23.677 23:20:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:23.677 23:20:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:38:23.936 /dev/nbd1 00:38:23.936 23:20:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:38:23.936 23:20:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:38:23.936 23:20:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:38:23.936 23:20:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:38:23.936 23:20:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:38:23.936 23:20:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:38:23.936 23:20:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:38:23.936 23:20:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:38:23.936 23:20:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:38:23.936 23:20:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:38:23.936 23:20:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:23.936 1+0 records in 00:38:23.936 1+0 records out 00:38:23.936 4096 bytes (4.1 kB, 
4.0 KiB) copied, 0.000373263 s, 11.0 MB/s 00:38:23.936 23:20:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:23.936 23:20:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:38:23.936 23:20:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:23.936 23:20:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:38:23.936 23:20:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:38:23.936 23:20:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:23.937 23:20:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:23.937 23:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:38:24.195 23:20:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:38:24.195 23:20:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:38:24.195 23:20:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:38:24.195 23:20:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:24.195 23:20:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:38:24.195 23:20:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:24.196 23:20:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:38:24.454 23:20:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:38:24.455 23:20:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:24.455 23:20:04 bdev_raid.raid_rebuild_test 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:24.455 23:20:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:24.455 23:20:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:24.455 23:20:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:24.455 23:20:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:38:24.455 23:20:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:38:24.455 23:20:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:24.455 23:20:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:38:24.712 23:20:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:38:24.712 23:20:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:38:24.712 23:20:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:38:24.712 23:20:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:24.712 23:20:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:24.712 23:20:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:38:24.712 23:20:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:38:24.712 23:20:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:38:24.712 23:20:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:38:24.712 23:20:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77429 00:38:24.712 23:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77429 ']' 00:38:24.712 23:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77429 00:38:24.712 
23:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:38:24.712 23:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:24.712 23:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77429 00:38:24.712 23:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:24.712 23:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:24.712 23:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77429' 00:38:24.712 killing process with pid 77429 00:38:24.712 23:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77429 00:38:24.712 Received shutdown signal, test time was about 60.000000 seconds 00:38:24.712 00:38:24.712 Latency(us) 00:38:24.712 [2024-12-09T23:20:05.348Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:24.712 [2024-12-09T23:20:05.348Z] =================================================================================================================== 00:38:24.712 [2024-12-09T23:20:05.348Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:24.712 [2024-12-09 23:20:05.179738] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:24.712 23:20:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77429 00:38:25.301 [2024-12-09 23:20:05.677480] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:26.237 23:20:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:38:26.237 00:38:26.237 real 0m18.465s 00:38:26.237 user 0m20.060s 00:38:26.237 sys 0m4.047s 00:38:26.237 23:20:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:26.237 23:20:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:26.237 
************************************ 00:38:26.237 END TEST raid_rebuild_test 00:38:26.237 ************************************ 00:38:26.237 23:20:06 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:38:26.237 23:20:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:38:26.237 23:20:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:26.237 23:20:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:38:26.497 ************************************ 00:38:26.497 START TEST raid_rebuild_test_sb 00:38:26.497 ************************************ 00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- 
# (( i++ )) 00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=77885 
00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 77885 00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 77885 ']' 00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:26.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:26.497 23:20:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:26.497 I/O size of 3145728 is greater than zero copy threshold (65536). 00:38:26.497 Zero copy mechanism will not be used. 00:38:26.497 [2024-12-09 23:20:06.977471] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:38:26.497 [2024-12-09 23:20:06.977598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77885 ] 00:38:26.756 [2024-12-09 23:20:07.157297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:26.756 [2024-12-09 23:20:07.269257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:27.015 [2024-12-09 23:20:07.457514] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:27.015 [2024-12-09 23:20:07.457587] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:27.281 23:20:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:27.281 23:20:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:38:27.281 23:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:38:27.281 23:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:38:27.281 23:20:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.281 23:20:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:27.281 BaseBdev1_malloc 00:38:27.281 23:20:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.281 23:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:38:27.281 23:20:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.281 23:20:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:27.281 [2024-12-09 23:20:07.862469] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:38:27.281 [2024-12-09 23:20:07.862541] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:27.281 [2024-12-09 23:20:07.862566] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:38:27.281 [2024-12-09 23:20:07.862582] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:27.281 [2024-12-09 23:20:07.865020] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:27.281 [2024-12-09 23:20:07.865070] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:38:27.281 BaseBdev1 00:38:27.281 23:20:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.281 23:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:38:27.281 23:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:38:27.281 23:20:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.281 23:20:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:27.281 BaseBdev2_malloc 00:38:27.281 23:20:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.281 23:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:38:27.281 23:20:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.281 23:20:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:27.543 [2024-12-09 23:20:07.918939] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:38:27.543 [2024-12-09 23:20:07.919009] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:27.543 [2024-12-09 23:20:07.919033] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:38:27.543 [2024-12-09 23:20:07.919047] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:27.543 [2024-12-09 23:20:07.921406] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:27.543 [2024-12-09 23:20:07.921447] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:38:27.543 BaseBdev2 00:38:27.543 23:20:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.543 23:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:38:27.543 23:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:38:27.543 23:20:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.543 23:20:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:27.543 BaseBdev3_malloc 00:38:27.543 23:20:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.543 23:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:38:27.543 23:20:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.543 23:20:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:27.543 [2024-12-09 23:20:07.991385] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:38:27.543 [2024-12-09 23:20:07.991484] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:27.543 [2024-12-09 23:20:07.991513] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:38:27.543 [2024-12-09 23:20:07.991528] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:38:27.543 [2024-12-09 23:20:07.994017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:27.543 [2024-12-09 23:20:07.994058] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:38:27.543 BaseBdev3 00:38:27.543 23:20:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.543 23:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:38:27.543 23:20:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:38:27.543 23:20:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.543 23:20:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:27.543 BaseBdev4_malloc 00:38:27.543 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.543 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:38:27.543 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.543 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:27.543 [2024-12-09 23:20:08.047456] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:38:27.543 [2024-12-09 23:20:08.047518] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:27.543 [2024-12-09 23:20:08.047543] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:38:27.543 [2024-12-09 23:20:08.047558] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:27.543 [2024-12-09 23:20:08.049888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:27.543 [2024-12-09 23:20:08.049929] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:38:27.543 BaseBdev4 00:38:27.543 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.543 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:38:27.543 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.543 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:27.543 spare_malloc 00:38:27.543 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.543 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:38:27.543 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.543 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:27.543 spare_delay 00:38:27.543 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.544 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:38:27.544 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.544 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:27.544 [2024-12-09 23:20:08.115022] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:27.544 [2024-12-09 23:20:08.115102] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:27.544 [2024-12-09 23:20:08.115127] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:38:27.544 [2024-12-09 23:20:08.115142] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:38:27.544 [2024-12-09 23:20:08.117645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:27.544 [2024-12-09 23:20:08.117808] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:27.544 spare 00:38:27.544 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.544 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:38:27.544 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.544 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:27.544 [2024-12-09 23:20:08.127053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:27.544 [2024-12-09 23:20:08.129156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:27.544 [2024-12-09 23:20:08.129349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:38:27.544 [2024-12-09 23:20:08.129427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:38:27.544 [2024-12-09 23:20:08.129635] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:38:27.544 [2024-12-09 23:20:08.129651] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:38:27.544 [2024-12-09 23:20:08.129934] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:38:27.544 [2024-12-09 23:20:08.130126] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:38:27.544 [2024-12-09 23:20:08.130137] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:38:27.544 [2024-12-09 23:20:08.130308] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:27.544 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.544 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:38:27.544 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:27.544 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:27.544 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:27.544 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:27.544 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:27.544 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:27.544 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:27.544 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:27.544 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:27.544 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:27.544 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:27.544 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.544 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:27.544 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.803 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:27.803 "name": "raid_bdev1", 00:38:27.803 "uuid": 
"ffb406aa-1489-4d9d-8263-66f5f431cbea", 00:38:27.803 "strip_size_kb": 0, 00:38:27.803 "state": "online", 00:38:27.803 "raid_level": "raid1", 00:38:27.803 "superblock": true, 00:38:27.803 "num_base_bdevs": 4, 00:38:27.803 "num_base_bdevs_discovered": 4, 00:38:27.803 "num_base_bdevs_operational": 4, 00:38:27.803 "base_bdevs_list": [ 00:38:27.803 { 00:38:27.803 "name": "BaseBdev1", 00:38:27.803 "uuid": "1f482448-8264-5176-bf4a-b3afcb3b5148", 00:38:27.803 "is_configured": true, 00:38:27.803 "data_offset": 2048, 00:38:27.803 "data_size": 63488 00:38:27.803 }, 00:38:27.803 { 00:38:27.803 "name": "BaseBdev2", 00:38:27.803 "uuid": "ded07e6e-a29e-5822-9732-f0463d59146e", 00:38:27.803 "is_configured": true, 00:38:27.803 "data_offset": 2048, 00:38:27.803 "data_size": 63488 00:38:27.803 }, 00:38:27.803 { 00:38:27.803 "name": "BaseBdev3", 00:38:27.803 "uuid": "299e937a-2360-5a66-865f-142678ae8c6a", 00:38:27.803 "is_configured": true, 00:38:27.803 "data_offset": 2048, 00:38:27.803 "data_size": 63488 00:38:27.803 }, 00:38:27.803 { 00:38:27.803 "name": "BaseBdev4", 00:38:27.803 "uuid": "99f5f168-39aa-582d-a5ce-654a8378d8c9", 00:38:27.803 "is_configured": true, 00:38:27.803 "data_offset": 2048, 00:38:27.803 "data_size": 63488 00:38:27.803 } 00:38:27.803 ] 00:38:27.803 }' 00:38:27.803 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:27.803 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:28.074 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:38:28.074 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:38:28.074 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.074 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:28.074 [2024-12-09 23:20:08.574888] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:38:28.074 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.074 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:38:28.074 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:28.074 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.074 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:28.074 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:38:28.074 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.074 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:38:28.074 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:38:28.074 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:38:28.074 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:38:28.074 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:38:28.074 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:38:28.075 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:38:28.075 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:38:28.075 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:38:28.075 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:38:28.075 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:38:28.075 23:20:08 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:38:28.075 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:38:28.075 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:38:28.336 [2024-12-09 23:20:08.866473] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:38:28.336 /dev/nbd0 00:38:28.337 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:38:28.337 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:38:28.337 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:38:28.337 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:38:28.337 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:38:28.337 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:38:28.337 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:38:28.337 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:38:28.337 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:38:28.337 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:38:28.337 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:28.337 1+0 records in 00:38:28.337 1+0 records out 00:38:28.337 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265791 s, 15.4 MB/s 00:38:28.337 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:28.337 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:38:28.337 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:28.337 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:38:28.337 23:20:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:38:28.337 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:28.337 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:38:28.337 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:38:28.337 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:38:28.337 23:20:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:38:34.981 63488+0 records in 00:38:34.981 63488+0 records out 00:38:34.981 32505856 bytes (33 MB, 31 MiB) copied, 5.76975 s, 5.6 MB/s 00:38:34.981 23:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:38:34.981 23:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:38:34.981 23:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:38:34.981 23:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:34.981 23:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:38:34.981 23:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:34.981 23:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk 
/dev/nbd0 00:38:34.981 [2024-12-09 23:20:14.946912] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:34.981 23:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:38:34.981 23:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:34.981 23:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:34.981 23:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:34.981 23:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:34.981 23:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:34.981 23:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:38:34.981 23:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:38:34.981 23:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:38:34.981 23:20:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.981 23:20:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:34.981 [2024-12-09 23:20:14.965417] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:38:34.981 23:20:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.981 23:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:38:34.981 23:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:34.981 23:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:34.981 23:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:34.981 23:20:14 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:34.981 23:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:38:34.981 23:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:34.981 23:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:34.981 23:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:34.981 23:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:34.981 23:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:34.981 23:20:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:34.981 23:20:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.981 23:20:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:34.981 23:20:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.981 23:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:34.981 "name": "raid_bdev1", 00:38:34.981 "uuid": "ffb406aa-1489-4d9d-8263-66f5f431cbea", 00:38:34.981 "strip_size_kb": 0, 00:38:34.981 "state": "online", 00:38:34.981 "raid_level": "raid1", 00:38:34.981 "superblock": true, 00:38:34.981 "num_base_bdevs": 4, 00:38:34.981 "num_base_bdevs_discovered": 3, 00:38:34.981 "num_base_bdevs_operational": 3, 00:38:34.981 "base_bdevs_list": [ 00:38:34.981 { 00:38:34.981 "name": null, 00:38:34.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:34.981 "is_configured": false, 00:38:34.981 "data_offset": 0, 00:38:34.981 "data_size": 63488 00:38:34.981 }, 00:38:34.981 { 00:38:34.981 "name": "BaseBdev2", 00:38:34.981 "uuid": "ded07e6e-a29e-5822-9732-f0463d59146e", 00:38:34.981 "is_configured": true, 00:38:34.981 
"data_offset": 2048, 00:38:34.981 "data_size": 63488 00:38:34.981 }, 00:38:34.981 { 00:38:34.981 "name": "BaseBdev3", 00:38:34.981 "uuid": "299e937a-2360-5a66-865f-142678ae8c6a", 00:38:34.981 "is_configured": true, 00:38:34.981 "data_offset": 2048, 00:38:34.981 "data_size": 63488 00:38:34.981 }, 00:38:34.981 { 00:38:34.981 "name": "BaseBdev4", 00:38:34.981 "uuid": "99f5f168-39aa-582d-a5ce-654a8378d8c9", 00:38:34.981 "is_configured": true, 00:38:34.981 "data_offset": 2048, 00:38:34.981 "data_size": 63488 00:38:34.981 } 00:38:34.981 ] 00:38:34.981 }' 00:38:34.981 23:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:34.981 23:20:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:34.981 23:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:38:34.981 23:20:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.981 23:20:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:34.981 [2024-12-09 23:20:15.408762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:34.981 [2024-12-09 23:20:15.424886] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:38:34.981 23:20:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.981 23:20:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:38:34.981 [2024-12-09 23:20:15.427421] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:35.919 23:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:35.919 23:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:35.919 23:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:38:35.919 23:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:35.919 23:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:35.919 23:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:35.919 23:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:35.919 23:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.919 23:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:35.919 23:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.919 23:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:35.919 "name": "raid_bdev1", 00:38:35.919 "uuid": "ffb406aa-1489-4d9d-8263-66f5f431cbea", 00:38:35.919 "strip_size_kb": 0, 00:38:35.919 "state": "online", 00:38:35.919 "raid_level": "raid1", 00:38:35.919 "superblock": true, 00:38:35.919 "num_base_bdevs": 4, 00:38:35.919 "num_base_bdevs_discovered": 4, 00:38:35.919 "num_base_bdevs_operational": 4, 00:38:35.919 "process": { 00:38:35.919 "type": "rebuild", 00:38:35.919 "target": "spare", 00:38:35.919 "progress": { 00:38:35.919 "blocks": 20480, 00:38:35.919 "percent": 32 00:38:35.919 } 00:38:35.919 }, 00:38:35.919 "base_bdevs_list": [ 00:38:35.919 { 00:38:35.919 "name": "spare", 00:38:35.919 "uuid": "030373b3-d367-5311-b652-4a2555e3fd4f", 00:38:35.919 "is_configured": true, 00:38:35.919 "data_offset": 2048, 00:38:35.919 "data_size": 63488 00:38:35.919 }, 00:38:35.919 { 00:38:35.919 "name": "BaseBdev2", 00:38:35.919 "uuid": "ded07e6e-a29e-5822-9732-f0463d59146e", 00:38:35.919 "is_configured": true, 00:38:35.919 "data_offset": 2048, 00:38:35.919 "data_size": 63488 00:38:35.919 }, 00:38:35.919 { 00:38:35.919 "name": "BaseBdev3", 00:38:35.919 "uuid": 
"299e937a-2360-5a66-865f-142678ae8c6a", 00:38:35.919 "is_configured": true, 00:38:35.919 "data_offset": 2048, 00:38:35.919 "data_size": 63488 00:38:35.919 }, 00:38:35.919 { 00:38:35.919 "name": "BaseBdev4", 00:38:35.919 "uuid": "99f5f168-39aa-582d-a5ce-654a8378d8c9", 00:38:35.919 "is_configured": true, 00:38:35.919 "data_offset": 2048, 00:38:35.919 "data_size": 63488 00:38:35.919 } 00:38:35.919 ] 00:38:35.919 }' 00:38:35.919 23:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:35.919 23:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:35.919 23:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:36.178 23:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:36.178 23:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:38:36.178 23:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:36.178 23:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:36.178 [2024-12-09 23:20:16.579691] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:36.178 [2024-12-09 23:20:16.633463] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:36.178 [2024-12-09 23:20:16.633739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:36.178 [2024-12-09 23:20:16.633768] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:36.178 [2024-12-09 23:20:16.633783] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:36.178 23:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:36.178 23:20:16 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:38:36.178 23:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:36.178 23:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:36.178 23:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:36.178 23:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:36.178 23:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:38:36.178 23:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:36.178 23:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:36.178 23:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:36.178 23:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:36.178 23:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:36.178 23:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:36.178 23:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:36.178 23:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:36.178 23:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:36.178 23:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:36.178 "name": "raid_bdev1", 00:38:36.178 "uuid": "ffb406aa-1489-4d9d-8263-66f5f431cbea", 00:38:36.178 "strip_size_kb": 0, 00:38:36.178 "state": "online", 00:38:36.178 "raid_level": "raid1", 00:38:36.178 "superblock": true, 00:38:36.178 "num_base_bdevs": 4, 00:38:36.178 
"num_base_bdevs_discovered": 3, 00:38:36.178 "num_base_bdevs_operational": 3, 00:38:36.178 "base_bdevs_list": [ 00:38:36.178 { 00:38:36.178 "name": null, 00:38:36.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:36.178 "is_configured": false, 00:38:36.178 "data_offset": 0, 00:38:36.178 "data_size": 63488 00:38:36.178 }, 00:38:36.178 { 00:38:36.178 "name": "BaseBdev2", 00:38:36.178 "uuid": "ded07e6e-a29e-5822-9732-f0463d59146e", 00:38:36.178 "is_configured": true, 00:38:36.178 "data_offset": 2048, 00:38:36.178 "data_size": 63488 00:38:36.178 }, 00:38:36.178 { 00:38:36.178 "name": "BaseBdev3", 00:38:36.178 "uuid": "299e937a-2360-5a66-865f-142678ae8c6a", 00:38:36.178 "is_configured": true, 00:38:36.178 "data_offset": 2048, 00:38:36.178 "data_size": 63488 00:38:36.178 }, 00:38:36.178 { 00:38:36.178 "name": "BaseBdev4", 00:38:36.178 "uuid": "99f5f168-39aa-582d-a5ce-654a8378d8c9", 00:38:36.178 "is_configured": true, 00:38:36.178 "data_offset": 2048, 00:38:36.178 "data_size": 63488 00:38:36.178 } 00:38:36.178 ] 00:38:36.178 }' 00:38:36.178 23:20:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:36.178 23:20:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:36.435 23:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:36.435 23:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:36.435 23:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:36.435 23:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:36.435 23:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:36.693 23:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:36.693 23:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:38:36.693 23:20:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:36.693 23:20:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:36.693 23:20:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:36.693 23:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:36.693 "name": "raid_bdev1", 00:38:36.693 "uuid": "ffb406aa-1489-4d9d-8263-66f5f431cbea", 00:38:36.693 "strip_size_kb": 0, 00:38:36.693 "state": "online", 00:38:36.693 "raid_level": "raid1", 00:38:36.693 "superblock": true, 00:38:36.693 "num_base_bdevs": 4, 00:38:36.693 "num_base_bdevs_discovered": 3, 00:38:36.693 "num_base_bdevs_operational": 3, 00:38:36.693 "base_bdevs_list": [ 00:38:36.693 { 00:38:36.693 "name": null, 00:38:36.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:36.693 "is_configured": false, 00:38:36.693 "data_offset": 0, 00:38:36.693 "data_size": 63488 00:38:36.693 }, 00:38:36.693 { 00:38:36.693 "name": "BaseBdev2", 00:38:36.693 "uuid": "ded07e6e-a29e-5822-9732-f0463d59146e", 00:38:36.693 "is_configured": true, 00:38:36.693 "data_offset": 2048, 00:38:36.693 "data_size": 63488 00:38:36.693 }, 00:38:36.693 { 00:38:36.693 "name": "BaseBdev3", 00:38:36.693 "uuid": "299e937a-2360-5a66-865f-142678ae8c6a", 00:38:36.693 "is_configured": true, 00:38:36.693 "data_offset": 2048, 00:38:36.693 "data_size": 63488 00:38:36.693 }, 00:38:36.693 { 00:38:36.693 "name": "BaseBdev4", 00:38:36.693 "uuid": "99f5f168-39aa-582d-a5ce-654a8378d8c9", 00:38:36.693 "is_configured": true, 00:38:36.693 "data_offset": 2048, 00:38:36.693 "data_size": 63488 00:38:36.693 } 00:38:36.693 ] 00:38:36.693 }' 00:38:36.693 23:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:36.693 23:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:38:36.693 23:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:36.693 23:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:36.693 23:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:38:36.693 23:20:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:36.693 23:20:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:36.693 [2024-12-09 23:20:17.203560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:36.693 [2024-12-09 23:20:17.218807] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:38:36.693 23:20:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:36.693 23:20:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:38:36.693 [2024-12-09 23:20:17.221048] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:37.634 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:37.634 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:37.634 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:37.634 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:37.635 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:37.635 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:37.635 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:37.635 23:20:18 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:38:37.635 23:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:37.635 23:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:37.896 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:37.896 "name": "raid_bdev1", 00:38:37.896 "uuid": "ffb406aa-1489-4d9d-8263-66f5f431cbea", 00:38:37.896 "strip_size_kb": 0, 00:38:37.896 "state": "online", 00:38:37.896 "raid_level": "raid1", 00:38:37.896 "superblock": true, 00:38:37.896 "num_base_bdevs": 4, 00:38:37.896 "num_base_bdevs_discovered": 4, 00:38:37.896 "num_base_bdevs_operational": 4, 00:38:37.896 "process": { 00:38:37.896 "type": "rebuild", 00:38:37.896 "target": "spare", 00:38:37.896 "progress": { 00:38:37.896 "blocks": 20480, 00:38:37.896 "percent": 32 00:38:37.896 } 00:38:37.896 }, 00:38:37.896 "base_bdevs_list": [ 00:38:37.896 { 00:38:37.896 "name": "spare", 00:38:37.896 "uuid": "030373b3-d367-5311-b652-4a2555e3fd4f", 00:38:37.896 "is_configured": true, 00:38:37.896 "data_offset": 2048, 00:38:37.896 "data_size": 63488 00:38:37.896 }, 00:38:37.896 { 00:38:37.896 "name": "BaseBdev2", 00:38:37.896 "uuid": "ded07e6e-a29e-5822-9732-f0463d59146e", 00:38:37.896 "is_configured": true, 00:38:37.896 "data_offset": 2048, 00:38:37.896 "data_size": 63488 00:38:37.896 }, 00:38:37.896 { 00:38:37.896 "name": "BaseBdev3", 00:38:37.896 "uuid": "299e937a-2360-5a66-865f-142678ae8c6a", 00:38:37.896 "is_configured": true, 00:38:37.896 "data_offset": 2048, 00:38:37.896 "data_size": 63488 00:38:37.896 }, 00:38:37.896 { 00:38:37.896 "name": "BaseBdev4", 00:38:37.896 "uuid": "99f5f168-39aa-582d-a5ce-654a8378d8c9", 00:38:37.896 "is_configured": true, 00:38:37.896 "data_offset": 2048, 00:38:37.896 "data_size": 63488 00:38:37.896 } 00:38:37.896 ] 00:38:37.896 }' 00:38:37.896 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:38:37.896 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:37.896 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:37.896 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:37.896 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:38:37.896 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:38:37.896 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:38:37.896 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:38:37.896 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:38:37.896 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:38:37.896 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:38:37.896 23:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:37.896 23:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:37.896 [2024-12-09 23:20:18.368026] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:38:37.896 [2024-12-09 23:20:18.526658] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:38:38.156 23:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:38.156 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:38:38.157 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:38:38.157 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:38:38.157 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:38.157 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:38.157 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:38.157 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:38.157 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:38.157 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:38.157 23:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:38.157 23:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:38.157 23:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:38.157 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:38.157 "name": "raid_bdev1", 00:38:38.157 "uuid": "ffb406aa-1489-4d9d-8263-66f5f431cbea", 00:38:38.157 "strip_size_kb": 0, 00:38:38.157 "state": "online", 00:38:38.157 "raid_level": "raid1", 00:38:38.157 "superblock": true, 00:38:38.157 "num_base_bdevs": 4, 00:38:38.157 "num_base_bdevs_discovered": 3, 00:38:38.157 "num_base_bdevs_operational": 3, 00:38:38.157 "process": { 00:38:38.157 "type": "rebuild", 00:38:38.157 "target": "spare", 00:38:38.157 "progress": { 00:38:38.157 "blocks": 24576, 00:38:38.157 "percent": 38 00:38:38.157 } 00:38:38.157 }, 00:38:38.157 "base_bdevs_list": [ 00:38:38.157 { 00:38:38.157 "name": "spare", 00:38:38.157 "uuid": "030373b3-d367-5311-b652-4a2555e3fd4f", 00:38:38.157 "is_configured": true, 00:38:38.157 "data_offset": 2048, 00:38:38.157 "data_size": 63488 00:38:38.157 }, 00:38:38.157 { 00:38:38.157 "name": null, 00:38:38.157 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:38:38.157 "is_configured": false, 00:38:38.157 "data_offset": 0, 00:38:38.157 "data_size": 63488 00:38:38.157 }, 00:38:38.157 { 00:38:38.157 "name": "BaseBdev3", 00:38:38.157 "uuid": "299e937a-2360-5a66-865f-142678ae8c6a", 00:38:38.157 "is_configured": true, 00:38:38.157 "data_offset": 2048, 00:38:38.157 "data_size": 63488 00:38:38.157 }, 00:38:38.157 { 00:38:38.157 "name": "BaseBdev4", 00:38:38.157 "uuid": "99f5f168-39aa-582d-a5ce-654a8378d8c9", 00:38:38.157 "is_configured": true, 00:38:38.157 "data_offset": 2048, 00:38:38.157 "data_size": 63488 00:38:38.157 } 00:38:38.157 ] 00:38:38.157 }' 00:38:38.157 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:38.157 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:38.157 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:38.157 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:38.157 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=468 00:38:38.157 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:38.157 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:38.157 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:38.157 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:38.157 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:38.157 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:38.157 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:38.157 
23:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:38.157 23:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:38.157 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:38.157 23:20:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:38.157 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:38.157 "name": "raid_bdev1", 00:38:38.157 "uuid": "ffb406aa-1489-4d9d-8263-66f5f431cbea", 00:38:38.157 "strip_size_kb": 0, 00:38:38.157 "state": "online", 00:38:38.157 "raid_level": "raid1", 00:38:38.157 "superblock": true, 00:38:38.157 "num_base_bdevs": 4, 00:38:38.157 "num_base_bdevs_discovered": 3, 00:38:38.157 "num_base_bdevs_operational": 3, 00:38:38.157 "process": { 00:38:38.157 "type": "rebuild", 00:38:38.157 "target": "spare", 00:38:38.157 "progress": { 00:38:38.157 "blocks": 26624, 00:38:38.157 "percent": 41 00:38:38.157 } 00:38:38.157 }, 00:38:38.157 "base_bdevs_list": [ 00:38:38.157 { 00:38:38.157 "name": "spare", 00:38:38.157 "uuid": "030373b3-d367-5311-b652-4a2555e3fd4f", 00:38:38.157 "is_configured": true, 00:38:38.157 "data_offset": 2048, 00:38:38.157 "data_size": 63488 00:38:38.157 }, 00:38:38.157 { 00:38:38.157 "name": null, 00:38:38.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:38.157 "is_configured": false, 00:38:38.157 "data_offset": 0, 00:38:38.157 "data_size": 63488 00:38:38.157 }, 00:38:38.157 { 00:38:38.157 "name": "BaseBdev3", 00:38:38.157 "uuid": "299e937a-2360-5a66-865f-142678ae8c6a", 00:38:38.157 "is_configured": true, 00:38:38.157 "data_offset": 2048, 00:38:38.157 "data_size": 63488 00:38:38.157 }, 00:38:38.157 { 00:38:38.157 "name": "BaseBdev4", 00:38:38.157 "uuid": "99f5f168-39aa-582d-a5ce-654a8378d8c9", 00:38:38.157 "is_configured": true, 00:38:38.157 "data_offset": 2048, 00:38:38.157 "data_size": 63488 
00:38:38.157 } 00:38:38.157 ] 00:38:38.157 }' 00:38:38.157 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:38.157 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:38.157 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:38.417 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:38.417 23:20:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:38:39.360 23:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:39.360 23:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:39.360 23:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:39.360 23:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:39.360 23:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:39.360 23:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:39.360 23:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:39.360 23:20:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:39.360 23:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:39.360 23:20:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:39.360 23:20:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:39.360 23:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:39.360 "name": "raid_bdev1", 00:38:39.360 "uuid": 
"ffb406aa-1489-4d9d-8263-66f5f431cbea", 00:38:39.360 "strip_size_kb": 0, 00:38:39.360 "state": "online", 00:38:39.360 "raid_level": "raid1", 00:38:39.360 "superblock": true, 00:38:39.360 "num_base_bdevs": 4, 00:38:39.360 "num_base_bdevs_discovered": 3, 00:38:39.360 "num_base_bdevs_operational": 3, 00:38:39.360 "process": { 00:38:39.360 "type": "rebuild", 00:38:39.360 "target": "spare", 00:38:39.360 "progress": { 00:38:39.360 "blocks": 49152, 00:38:39.360 "percent": 77 00:38:39.360 } 00:38:39.360 }, 00:38:39.360 "base_bdevs_list": [ 00:38:39.360 { 00:38:39.360 "name": "spare", 00:38:39.360 "uuid": "030373b3-d367-5311-b652-4a2555e3fd4f", 00:38:39.360 "is_configured": true, 00:38:39.360 "data_offset": 2048, 00:38:39.360 "data_size": 63488 00:38:39.360 }, 00:38:39.360 { 00:38:39.360 "name": null, 00:38:39.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:39.360 "is_configured": false, 00:38:39.360 "data_offset": 0, 00:38:39.360 "data_size": 63488 00:38:39.360 }, 00:38:39.360 { 00:38:39.360 "name": "BaseBdev3", 00:38:39.360 "uuid": "299e937a-2360-5a66-865f-142678ae8c6a", 00:38:39.360 "is_configured": true, 00:38:39.360 "data_offset": 2048, 00:38:39.360 "data_size": 63488 00:38:39.360 }, 00:38:39.360 { 00:38:39.360 "name": "BaseBdev4", 00:38:39.360 "uuid": "99f5f168-39aa-582d-a5ce-654a8378d8c9", 00:38:39.360 "is_configured": true, 00:38:39.360 "data_offset": 2048, 00:38:39.360 "data_size": 63488 00:38:39.360 } 00:38:39.360 ] 00:38:39.360 }' 00:38:39.360 23:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:39.360 23:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:39.360 23:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:39.360 23:20:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:39.360 23:20:19 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:38:39.931 [2024-12-09 23:20:20.435654] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:38:39.931 [2024-12-09 23:20:20.435755] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:38:39.931 [2024-12-09 23:20:20.435905] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:40.498 23:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:40.498 23:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:40.498 23:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:40.498 23:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:40.498 23:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:40.498 23:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:40.498 23:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:40.498 23:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:40.498 23:20:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:40.498 23:20:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:40.498 23:20:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:40.498 23:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:40.498 "name": "raid_bdev1", 00:38:40.498 "uuid": "ffb406aa-1489-4d9d-8263-66f5f431cbea", 00:38:40.498 "strip_size_kb": 0, 00:38:40.498 "state": "online", 00:38:40.498 "raid_level": "raid1", 00:38:40.498 "superblock": true, 00:38:40.498 "num_base_bdevs": 
4, 00:38:40.498 "num_base_bdevs_discovered": 3, 00:38:40.498 "num_base_bdevs_operational": 3, 00:38:40.498 "base_bdevs_list": [ 00:38:40.498 { 00:38:40.498 "name": "spare", 00:38:40.498 "uuid": "030373b3-d367-5311-b652-4a2555e3fd4f", 00:38:40.498 "is_configured": true, 00:38:40.498 "data_offset": 2048, 00:38:40.498 "data_size": 63488 00:38:40.498 }, 00:38:40.498 { 00:38:40.498 "name": null, 00:38:40.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:40.498 "is_configured": false, 00:38:40.498 "data_offset": 0, 00:38:40.498 "data_size": 63488 00:38:40.498 }, 00:38:40.498 { 00:38:40.498 "name": "BaseBdev3", 00:38:40.498 "uuid": "299e937a-2360-5a66-865f-142678ae8c6a", 00:38:40.498 "is_configured": true, 00:38:40.498 "data_offset": 2048, 00:38:40.498 "data_size": 63488 00:38:40.498 }, 00:38:40.498 { 00:38:40.498 "name": "BaseBdev4", 00:38:40.498 "uuid": "99f5f168-39aa-582d-a5ce-654a8378d8c9", 00:38:40.498 "is_configured": true, 00:38:40.498 "data_offset": 2048, 00:38:40.498 "data_size": 63488 00:38:40.498 } 00:38:40.498 ] 00:38:40.498 }' 00:38:40.498 23:20:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:40.498 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:38:40.498 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:40.498 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:38:40.498 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:38:40.498 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:40.498 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:40.498 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:40.498 23:20:21 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:40.498 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:40.498 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:40.498 23:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:40.498 23:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:40.498 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:40.498 23:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:40.498 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:40.498 "name": "raid_bdev1", 00:38:40.498 "uuid": "ffb406aa-1489-4d9d-8263-66f5f431cbea", 00:38:40.498 "strip_size_kb": 0, 00:38:40.498 "state": "online", 00:38:40.498 "raid_level": "raid1", 00:38:40.498 "superblock": true, 00:38:40.498 "num_base_bdevs": 4, 00:38:40.498 "num_base_bdevs_discovered": 3, 00:38:40.498 "num_base_bdevs_operational": 3, 00:38:40.498 "base_bdevs_list": [ 00:38:40.498 { 00:38:40.498 "name": "spare", 00:38:40.498 "uuid": "030373b3-d367-5311-b652-4a2555e3fd4f", 00:38:40.498 "is_configured": true, 00:38:40.499 "data_offset": 2048, 00:38:40.499 "data_size": 63488 00:38:40.499 }, 00:38:40.499 { 00:38:40.499 "name": null, 00:38:40.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:40.499 "is_configured": false, 00:38:40.499 "data_offset": 0, 00:38:40.499 "data_size": 63488 00:38:40.499 }, 00:38:40.499 { 00:38:40.499 "name": "BaseBdev3", 00:38:40.499 "uuid": "299e937a-2360-5a66-865f-142678ae8c6a", 00:38:40.499 "is_configured": true, 00:38:40.499 "data_offset": 2048, 00:38:40.499 "data_size": 63488 00:38:40.499 }, 00:38:40.499 { 00:38:40.499 "name": "BaseBdev4", 00:38:40.499 "uuid": 
"99f5f168-39aa-582d-a5ce-654a8378d8c9", 00:38:40.499 "is_configured": true, 00:38:40.499 "data_offset": 2048, 00:38:40.499 "data_size": 63488 00:38:40.499 } 00:38:40.499 ] 00:38:40.499 }' 00:38:40.499 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:40.765 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:40.765 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:40.765 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:40.765 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:38:40.765 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:40.765 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:40.765 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:40.765 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:40.765 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:38:40.765 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:40.765 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:40.765 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:40.765 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:40.765 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:40.765 23:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:40.765 23:20:21 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:40.765 23:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:40.765 23:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:40.765 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:40.765 "name": "raid_bdev1", 00:38:40.765 "uuid": "ffb406aa-1489-4d9d-8263-66f5f431cbea", 00:38:40.765 "strip_size_kb": 0, 00:38:40.765 "state": "online", 00:38:40.765 "raid_level": "raid1", 00:38:40.765 "superblock": true, 00:38:40.765 "num_base_bdevs": 4, 00:38:40.765 "num_base_bdevs_discovered": 3, 00:38:40.765 "num_base_bdevs_operational": 3, 00:38:40.765 "base_bdevs_list": [ 00:38:40.765 { 00:38:40.765 "name": "spare", 00:38:40.765 "uuid": "030373b3-d367-5311-b652-4a2555e3fd4f", 00:38:40.765 "is_configured": true, 00:38:40.765 "data_offset": 2048, 00:38:40.765 "data_size": 63488 00:38:40.765 }, 00:38:40.765 { 00:38:40.765 "name": null, 00:38:40.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:40.765 "is_configured": false, 00:38:40.765 "data_offset": 0, 00:38:40.765 "data_size": 63488 00:38:40.765 }, 00:38:40.765 { 00:38:40.765 "name": "BaseBdev3", 00:38:40.765 "uuid": "299e937a-2360-5a66-865f-142678ae8c6a", 00:38:40.765 "is_configured": true, 00:38:40.765 "data_offset": 2048, 00:38:40.765 "data_size": 63488 00:38:40.766 }, 00:38:40.766 { 00:38:40.766 "name": "BaseBdev4", 00:38:40.766 "uuid": "99f5f168-39aa-582d-a5ce-654a8378d8c9", 00:38:40.766 "is_configured": true, 00:38:40.766 "data_offset": 2048, 00:38:40.766 "data_size": 63488 00:38:40.766 } 00:38:40.766 ] 00:38:40.766 }' 00:38:40.766 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:40.766 23:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:41.026 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 
-- # rpc_cmd bdev_raid_delete raid_bdev1 00:38:41.026 23:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.026 23:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:41.026 [2024-12-09 23:20:21.616588] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:41.026 [2024-12-09 23:20:21.616628] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:41.026 [2024-12-09 23:20:21.616717] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:41.026 [2024-12-09 23:20:21.616797] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:41.026 [2024-12-09 23:20:21.616809] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:38:41.026 23:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.026 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:41.026 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:38:41.026 23:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.026 23:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:41.026 23:20:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.286 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:38:41.286 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:38:41.286 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:38:41.286 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 
00:38:41.286 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:38:41.286 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:38:41.286 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:38:41.286 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:38:41.286 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:38:41.286 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:38:41.286 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:38:41.286 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:41.286 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:38:41.546 /dev/nbd0 00:38:41.546 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:38:41.546 23:20:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:38:41.546 23:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:38:41.546 23:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:38:41.546 23:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:38:41.546 23:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:38:41.546 23:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:38:41.546 23:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:38:41.546 23:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:38:41.546 
23:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:38:41.546 23:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:41.546 1+0 records in 00:38:41.546 1+0 records out 00:38:41.546 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00059106 s, 6.9 MB/s 00:38:41.546 23:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:41.546 23:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:38:41.546 23:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:41.546 23:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:38:41.546 23:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:38:41.546 23:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:41.546 23:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:41.546 23:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:38:41.805 /dev/nbd1 00:38:41.805 23:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:38:41.805 23:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:38:41.805 23:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:38:41.805 23:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:38:41.805 23:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:38:41.805 23:20:22 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:38:41.805 23:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:38:41.805 23:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:38:41.805 23:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:38:41.805 23:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:38:41.805 23:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:41.805 1+0 records in 00:38:41.805 1+0 records out 00:38:41.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411132 s, 10.0 MB/s 00:38:41.805 23:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:41.805 23:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:38:41.805 23:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:41.805 23:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:38:41.805 23:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:38:41.805 23:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:41.805 23:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:41.805 23:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:38:42.065 23:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:38:42.065 23:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:38:42.065 23:20:22 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:38:42.065 23:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:42.065 23:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:38:42.065 23:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:42.065 23:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:38:42.323 23:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:38:42.323 23:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:42.323 23:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:42.323 23:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:42.323 23:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:42.323 23:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:42.323 23:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:38:42.323 23:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:38:42.323 23:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:42.323 23:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:38:42.323 23:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:38:42.323 23:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:38:42.323 23:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:38:42.323 23:20:22 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:42.323 23:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:42.323 23:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:38:42.323 23:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:38:42.323 23:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:38:42.323 23:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:38:42.323 23:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:38:42.323 23:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:42.324 23:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:42.582 23:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:42.582 23:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:38:42.582 23:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:42.582 23:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:42.582 [2024-12-09 23:20:22.968051] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:42.582 [2024-12-09 23:20:22.968115] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:42.582 [2024-12-09 23:20:22.968143] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:38:42.582 [2024-12-09 23:20:22.968155] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:42.582 [2024-12-09 23:20:22.970682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:42.582 [2024-12-09 23:20:22.970725] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:42.582 [2024-12-09 23:20:22.970831] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:38:42.582 [2024-12-09 23:20:22.970880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:42.582 [2024-12-09 23:20:22.971028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:38:42.583 [2024-12-09 23:20:22.971122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:38:42.583 spare 00:38:42.583 23:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:42.583 23:20:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:38:42.583 23:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:42.583 23:20:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:42.583 [2024-12-09 23:20:23.071059] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:38:42.583 [2024-12-09 23:20:23.071105] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:38:42.583 [2024-12-09 23:20:23.071490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:38:42.583 [2024-12-09 23:20:23.071703] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:38:42.583 [2024-12-09 23:20:23.071719] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:38:42.583 [2024-12-09 23:20:23.071941] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:42.583 23:20:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:42.583 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 3 00:38:42.583 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:42.583 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:42.583 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:42.583 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:42.583 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:38:42.583 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:42.583 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:42.583 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:42.583 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:42.583 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:42.583 23:20:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:42.583 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:42.583 23:20:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:42.583 23:20:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:42.583 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:42.583 "name": "raid_bdev1", 00:38:42.583 "uuid": "ffb406aa-1489-4d9d-8263-66f5f431cbea", 00:38:42.583 "strip_size_kb": 0, 00:38:42.583 "state": "online", 00:38:42.583 "raid_level": "raid1", 00:38:42.583 "superblock": true, 00:38:42.583 "num_base_bdevs": 4, 00:38:42.583 "num_base_bdevs_discovered": 3, 00:38:42.583 "num_base_bdevs_operational": 
3, 00:38:42.583 "base_bdevs_list": [ 00:38:42.583 { 00:38:42.583 "name": "spare", 00:38:42.583 "uuid": "030373b3-d367-5311-b652-4a2555e3fd4f", 00:38:42.583 "is_configured": true, 00:38:42.583 "data_offset": 2048, 00:38:42.583 "data_size": 63488 00:38:42.583 }, 00:38:42.583 { 00:38:42.583 "name": null, 00:38:42.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:42.583 "is_configured": false, 00:38:42.583 "data_offset": 2048, 00:38:42.583 "data_size": 63488 00:38:42.583 }, 00:38:42.583 { 00:38:42.583 "name": "BaseBdev3", 00:38:42.583 "uuid": "299e937a-2360-5a66-865f-142678ae8c6a", 00:38:42.583 "is_configured": true, 00:38:42.583 "data_offset": 2048, 00:38:42.583 "data_size": 63488 00:38:42.583 }, 00:38:42.583 { 00:38:42.583 "name": "BaseBdev4", 00:38:42.583 "uuid": "99f5f168-39aa-582d-a5ce-654a8378d8c9", 00:38:42.583 "is_configured": true, 00:38:42.583 "data_offset": 2048, 00:38:42.583 "data_size": 63488 00:38:42.583 } 00:38:42.583 ] 00:38:42.583 }' 00:38:42.583 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:42.583 23:20:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:43.151 23:20:23 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:43.151 "name": "raid_bdev1", 00:38:43.151 "uuid": "ffb406aa-1489-4d9d-8263-66f5f431cbea", 00:38:43.151 "strip_size_kb": 0, 00:38:43.151 "state": "online", 00:38:43.151 "raid_level": "raid1", 00:38:43.151 "superblock": true, 00:38:43.151 "num_base_bdevs": 4, 00:38:43.151 "num_base_bdevs_discovered": 3, 00:38:43.151 "num_base_bdevs_operational": 3, 00:38:43.151 "base_bdevs_list": [ 00:38:43.151 { 00:38:43.151 "name": "spare", 00:38:43.151 "uuid": "030373b3-d367-5311-b652-4a2555e3fd4f", 00:38:43.151 "is_configured": true, 00:38:43.151 "data_offset": 2048, 00:38:43.151 "data_size": 63488 00:38:43.151 }, 00:38:43.151 { 00:38:43.151 "name": null, 00:38:43.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:43.151 "is_configured": false, 00:38:43.151 "data_offset": 2048, 00:38:43.151 "data_size": 63488 00:38:43.151 }, 00:38:43.151 { 00:38:43.151 "name": "BaseBdev3", 00:38:43.151 "uuid": "299e937a-2360-5a66-865f-142678ae8c6a", 00:38:43.151 "is_configured": true, 00:38:43.151 "data_offset": 2048, 00:38:43.151 "data_size": 63488 00:38:43.151 }, 00:38:43.151 { 00:38:43.151 "name": "BaseBdev4", 00:38:43.151 "uuid": "99f5f168-39aa-582d-a5ce-654a8378d8c9", 00:38:43.151 "is_configured": true, 00:38:43.151 "data_offset": 2048, 00:38:43.151 "data_size": 63488 00:38:43.151 } 00:38:43.151 ] 00:38:43.151 }' 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:43.151 [2024-12-09 23:20:23.691142] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:43.151 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:43.151 "name": "raid_bdev1", 00:38:43.151 "uuid": "ffb406aa-1489-4d9d-8263-66f5f431cbea", 00:38:43.151 "strip_size_kb": 0, 00:38:43.151 "state": "online", 00:38:43.151 "raid_level": "raid1", 00:38:43.151 "superblock": true, 00:38:43.151 "num_base_bdevs": 4, 00:38:43.151 "num_base_bdevs_discovered": 2, 00:38:43.151 "num_base_bdevs_operational": 2, 00:38:43.151 "base_bdevs_list": [ 00:38:43.151 { 00:38:43.151 "name": null, 00:38:43.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:43.151 "is_configured": false, 00:38:43.151 "data_offset": 0, 00:38:43.151 "data_size": 63488 00:38:43.151 }, 00:38:43.151 { 00:38:43.152 "name": null, 00:38:43.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:43.152 "is_configured": false, 00:38:43.152 "data_offset": 2048, 00:38:43.152 "data_size": 63488 00:38:43.152 }, 00:38:43.152 { 00:38:43.152 "name": "BaseBdev3", 00:38:43.152 
"uuid": "299e937a-2360-5a66-865f-142678ae8c6a", 00:38:43.152 "is_configured": true, 00:38:43.152 "data_offset": 2048, 00:38:43.152 "data_size": 63488 00:38:43.152 }, 00:38:43.152 { 00:38:43.152 "name": "BaseBdev4", 00:38:43.152 "uuid": "99f5f168-39aa-582d-a5ce-654a8378d8c9", 00:38:43.152 "is_configured": true, 00:38:43.152 "data_offset": 2048, 00:38:43.152 "data_size": 63488 00:38:43.152 } 00:38:43.152 ] 00:38:43.152 }' 00:38:43.152 23:20:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:43.152 23:20:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:43.718 23:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:38:43.718 23:20:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:43.718 23:20:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:43.718 [2024-12-09 23:20:24.110556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:43.718 [2024-12-09 23:20:24.110768] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:38:43.718 [2024-12-09 23:20:24.110786] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:38:43.718 [2024-12-09 23:20:24.110832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:43.718 [2024-12-09 23:20:24.125504] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:38:43.718 23:20:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:43.718 23:20:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:38:43.718 [2024-12-09 23:20:24.127655] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:44.654 23:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:44.654 23:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:44.654 23:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:44.654 23:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:44.654 23:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:44.654 23:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:44.654 23:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:44.654 23:20:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.654 23:20:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:44.654 23:20:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.654 23:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:44.654 "name": "raid_bdev1", 00:38:44.654 "uuid": "ffb406aa-1489-4d9d-8263-66f5f431cbea", 00:38:44.654 "strip_size_kb": 0, 00:38:44.654 "state": "online", 00:38:44.654 "raid_level": "raid1", 
00:38:44.654 "superblock": true, 00:38:44.654 "num_base_bdevs": 4, 00:38:44.654 "num_base_bdevs_discovered": 3, 00:38:44.654 "num_base_bdevs_operational": 3, 00:38:44.654 "process": { 00:38:44.654 "type": "rebuild", 00:38:44.654 "target": "spare", 00:38:44.654 "progress": { 00:38:44.654 "blocks": 20480, 00:38:44.654 "percent": 32 00:38:44.654 } 00:38:44.655 }, 00:38:44.655 "base_bdevs_list": [ 00:38:44.655 { 00:38:44.655 "name": "spare", 00:38:44.655 "uuid": "030373b3-d367-5311-b652-4a2555e3fd4f", 00:38:44.655 "is_configured": true, 00:38:44.655 "data_offset": 2048, 00:38:44.655 "data_size": 63488 00:38:44.655 }, 00:38:44.655 { 00:38:44.655 "name": null, 00:38:44.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:44.655 "is_configured": false, 00:38:44.655 "data_offset": 2048, 00:38:44.655 "data_size": 63488 00:38:44.655 }, 00:38:44.655 { 00:38:44.655 "name": "BaseBdev3", 00:38:44.655 "uuid": "299e937a-2360-5a66-865f-142678ae8c6a", 00:38:44.655 "is_configured": true, 00:38:44.655 "data_offset": 2048, 00:38:44.655 "data_size": 63488 00:38:44.655 }, 00:38:44.655 { 00:38:44.655 "name": "BaseBdev4", 00:38:44.655 "uuid": "99f5f168-39aa-582d-a5ce-654a8378d8c9", 00:38:44.655 "is_configured": true, 00:38:44.655 "data_offset": 2048, 00:38:44.655 "data_size": 63488 00:38:44.655 } 00:38:44.655 ] 00:38:44.655 }' 00:38:44.655 23:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:44.655 23:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:44.655 23:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:44.655 23:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:44.655 23:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:38:44.655 23:20:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:38:44.655 23:20:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:44.655 [2024-12-09 23:20:25.279608] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:44.914 [2024-12-09 23:20:25.333422] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:44.914 [2024-12-09 23:20:25.333495] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:44.914 [2024-12-09 23:20:25.333516] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:44.914 [2024-12-09 23:20:25.333526] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:44.914 23:20:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.914 23:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:44.914 23:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:44.914 23:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:44.914 23:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:44.914 23:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:44.914 23:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:38:44.914 23:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:44.914 23:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:44.914 23:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:44.914 23:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:44.914 23:20:25 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:44.914 23:20:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.914 23:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:44.914 23:20:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:44.914 23:20:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.914 23:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:44.914 "name": "raid_bdev1", 00:38:44.914 "uuid": "ffb406aa-1489-4d9d-8263-66f5f431cbea", 00:38:44.914 "strip_size_kb": 0, 00:38:44.914 "state": "online", 00:38:44.914 "raid_level": "raid1", 00:38:44.914 "superblock": true, 00:38:44.914 "num_base_bdevs": 4, 00:38:44.914 "num_base_bdevs_discovered": 2, 00:38:44.914 "num_base_bdevs_operational": 2, 00:38:44.914 "base_bdevs_list": [ 00:38:44.914 { 00:38:44.914 "name": null, 00:38:44.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:44.914 "is_configured": false, 00:38:44.914 "data_offset": 0, 00:38:44.914 "data_size": 63488 00:38:44.914 }, 00:38:44.914 { 00:38:44.914 "name": null, 00:38:44.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:44.914 "is_configured": false, 00:38:44.914 "data_offset": 2048, 00:38:44.914 "data_size": 63488 00:38:44.914 }, 00:38:44.914 { 00:38:44.914 "name": "BaseBdev3", 00:38:44.914 "uuid": "299e937a-2360-5a66-865f-142678ae8c6a", 00:38:44.914 "is_configured": true, 00:38:44.914 "data_offset": 2048, 00:38:44.914 "data_size": 63488 00:38:44.914 }, 00:38:44.914 { 00:38:44.914 "name": "BaseBdev4", 00:38:44.914 "uuid": "99f5f168-39aa-582d-a5ce-654a8378d8c9", 00:38:44.914 "is_configured": true, 00:38:44.914 "data_offset": 2048, 00:38:44.914 "data_size": 63488 00:38:44.914 } 00:38:44.914 ] 00:38:44.914 }' 00:38:44.914 23:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:38:44.914 23:20:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:45.172 23:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:38:45.172 23:20:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.172 23:20:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:45.172 [2024-12-09 23:20:25.776297] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:45.172 [2024-12-09 23:20:25.776365] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:45.173 [2024-12-09 23:20:25.776408] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:38:45.173 [2024-12-09 23:20:25.776421] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:45.173 [2024-12-09 23:20:25.776911] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:45.173 [2024-12-09 23:20:25.776933] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:45.173 [2024-12-09 23:20:25.777036] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:38:45.173 [2024-12-09 23:20:25.777051] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:38:45.173 [2024-12-09 23:20:25.777068] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:38:45.173 [2024-12-09 23:20:25.777094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:45.173 [2024-12-09 23:20:25.791349] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:38:45.173 spare 00:38:45.173 23:20:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.173 23:20:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:38:45.173 [2024-12-09 23:20:25.793477] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:46.550 23:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:46.550 23:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:46.550 23:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:46.551 23:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:46.551 23:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:46.551 23:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:46.551 23:20:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:46.551 23:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:46.551 23:20:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:46.551 23:20:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:46.551 23:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:46.551 "name": "raid_bdev1", 00:38:46.551 "uuid": "ffb406aa-1489-4d9d-8263-66f5f431cbea", 00:38:46.551 "strip_size_kb": 0, 00:38:46.551 "state": "online", 00:38:46.551 
"raid_level": "raid1", 00:38:46.551 "superblock": true, 00:38:46.551 "num_base_bdevs": 4, 00:38:46.551 "num_base_bdevs_discovered": 3, 00:38:46.551 "num_base_bdevs_operational": 3, 00:38:46.551 "process": { 00:38:46.551 "type": "rebuild", 00:38:46.551 "target": "spare", 00:38:46.551 "progress": { 00:38:46.551 "blocks": 20480, 00:38:46.551 "percent": 32 00:38:46.551 } 00:38:46.551 }, 00:38:46.551 "base_bdevs_list": [ 00:38:46.551 { 00:38:46.551 "name": "spare", 00:38:46.551 "uuid": "030373b3-d367-5311-b652-4a2555e3fd4f", 00:38:46.551 "is_configured": true, 00:38:46.551 "data_offset": 2048, 00:38:46.551 "data_size": 63488 00:38:46.551 }, 00:38:46.551 { 00:38:46.551 "name": null, 00:38:46.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:46.551 "is_configured": false, 00:38:46.551 "data_offset": 2048, 00:38:46.551 "data_size": 63488 00:38:46.551 }, 00:38:46.551 { 00:38:46.551 "name": "BaseBdev3", 00:38:46.551 "uuid": "299e937a-2360-5a66-865f-142678ae8c6a", 00:38:46.551 "is_configured": true, 00:38:46.551 "data_offset": 2048, 00:38:46.551 "data_size": 63488 00:38:46.551 }, 00:38:46.551 { 00:38:46.551 "name": "BaseBdev4", 00:38:46.551 "uuid": "99f5f168-39aa-582d-a5ce-654a8378d8c9", 00:38:46.551 "is_configured": true, 00:38:46.551 "data_offset": 2048, 00:38:46.551 "data_size": 63488 00:38:46.551 } 00:38:46.551 ] 00:38:46.551 }' 00:38:46.551 23:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:46.551 23:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:46.551 23:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:46.551 23:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:46.551 23:20:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:38:46.551 23:20:26 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:38:46.551 23:20:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:46.551 [2024-12-09 23:20:26.921660] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:46.551 [2024-12-09 23:20:26.999276] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:46.551 [2024-12-09 23:20:26.999373] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:46.551 [2024-12-09 23:20:26.999405] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:46.551 [2024-12-09 23:20:26.999419] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:46.551 23:20:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:46.551 23:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:46.551 23:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:46.551 23:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:46.551 23:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:46.551 23:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:46.551 23:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:38:46.551 23:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:46.551 23:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:46.551 23:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:46.551 23:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:46.551 
23:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:46.551 23:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:46.551 23:20:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:46.551 23:20:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:46.551 23:20:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:46.551 23:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:46.551 "name": "raid_bdev1", 00:38:46.551 "uuid": "ffb406aa-1489-4d9d-8263-66f5f431cbea", 00:38:46.551 "strip_size_kb": 0, 00:38:46.551 "state": "online", 00:38:46.551 "raid_level": "raid1", 00:38:46.551 "superblock": true, 00:38:46.551 "num_base_bdevs": 4, 00:38:46.551 "num_base_bdevs_discovered": 2, 00:38:46.551 "num_base_bdevs_operational": 2, 00:38:46.551 "base_bdevs_list": [ 00:38:46.551 { 00:38:46.551 "name": null, 00:38:46.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:46.551 "is_configured": false, 00:38:46.551 "data_offset": 0, 00:38:46.551 "data_size": 63488 00:38:46.551 }, 00:38:46.551 { 00:38:46.551 "name": null, 00:38:46.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:46.551 "is_configured": false, 00:38:46.551 "data_offset": 2048, 00:38:46.551 "data_size": 63488 00:38:46.551 }, 00:38:46.551 { 00:38:46.551 "name": "BaseBdev3", 00:38:46.551 "uuid": "299e937a-2360-5a66-865f-142678ae8c6a", 00:38:46.551 "is_configured": true, 00:38:46.551 "data_offset": 2048, 00:38:46.551 "data_size": 63488 00:38:46.551 }, 00:38:46.551 { 00:38:46.551 "name": "BaseBdev4", 00:38:46.551 "uuid": "99f5f168-39aa-582d-a5ce-654a8378d8c9", 00:38:46.551 "is_configured": true, 00:38:46.551 "data_offset": 2048, 00:38:46.551 "data_size": 63488 00:38:46.551 } 00:38:46.551 ] 00:38:46.551 }' 00:38:46.551 23:20:27 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:46.551 23:20:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:46.810 23:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:46.810 23:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:46.810 23:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:46.810 23:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:46.810 23:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:47.070 23:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:47.070 23:20:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:47.070 23:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:47.070 23:20:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:47.070 23:20:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:47.070 23:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:47.070 "name": "raid_bdev1", 00:38:47.070 "uuid": "ffb406aa-1489-4d9d-8263-66f5f431cbea", 00:38:47.070 "strip_size_kb": 0, 00:38:47.070 "state": "online", 00:38:47.070 "raid_level": "raid1", 00:38:47.070 "superblock": true, 00:38:47.070 "num_base_bdevs": 4, 00:38:47.070 "num_base_bdevs_discovered": 2, 00:38:47.070 "num_base_bdevs_operational": 2, 00:38:47.070 "base_bdevs_list": [ 00:38:47.070 { 00:38:47.070 "name": null, 00:38:47.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:47.070 "is_configured": false, 00:38:47.070 "data_offset": 0, 00:38:47.070 "data_size": 63488 00:38:47.070 }, 00:38:47.070 
{ 00:38:47.070 "name": null, 00:38:47.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:47.070 "is_configured": false, 00:38:47.070 "data_offset": 2048, 00:38:47.070 "data_size": 63488 00:38:47.070 }, 00:38:47.070 { 00:38:47.070 "name": "BaseBdev3", 00:38:47.070 "uuid": "299e937a-2360-5a66-865f-142678ae8c6a", 00:38:47.070 "is_configured": true, 00:38:47.070 "data_offset": 2048, 00:38:47.070 "data_size": 63488 00:38:47.070 }, 00:38:47.070 { 00:38:47.070 "name": "BaseBdev4", 00:38:47.070 "uuid": "99f5f168-39aa-582d-a5ce-654a8378d8c9", 00:38:47.070 "is_configured": true, 00:38:47.070 "data_offset": 2048, 00:38:47.070 "data_size": 63488 00:38:47.070 } 00:38:47.070 ] 00:38:47.070 }' 00:38:47.070 23:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:47.070 23:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:47.070 23:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:47.070 23:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:47.070 23:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:38:47.070 23:20:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:47.070 23:20:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:47.070 23:20:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:47.070 23:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:38:47.070 23:20:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:47.070 23:20:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:47.070 [2024-12-09 23:20:27.596300] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:38:47.070 [2024-12-09 23:20:27.596369] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:47.070 [2024-12-09 23:20:27.596404] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:38:47.070 [2024-12-09 23:20:27.596419] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:47.070 [2024-12-09 23:20:27.596879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:47.070 [2024-12-09 23:20:27.596902] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:38:47.070 [2024-12-09 23:20:27.596987] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:38:47.070 [2024-12-09 23:20:27.597006] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:38:47.070 [2024-12-09 23:20:27.597016] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:38:47.070 [2024-12-09 23:20:27.597042] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:38:47.070 BaseBdev1 00:38:47.070 23:20:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:47.070 23:20:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:38:48.006 23:20:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:48.006 23:20:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:48.006 23:20:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:48.006 23:20:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:48.006 23:20:28 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:48.006 23:20:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:38:48.006 23:20:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:48.006 23:20:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:48.006 23:20:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:48.006 23:20:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:48.006 23:20:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:48.006 23:20:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:48.006 23:20:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:48.006 23:20:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:48.265 23:20:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:48.265 23:20:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:48.265 "name": "raid_bdev1", 00:38:48.265 "uuid": "ffb406aa-1489-4d9d-8263-66f5f431cbea", 00:38:48.265 "strip_size_kb": 0, 00:38:48.265 "state": "online", 00:38:48.265 "raid_level": "raid1", 00:38:48.265 "superblock": true, 00:38:48.265 "num_base_bdevs": 4, 00:38:48.265 "num_base_bdevs_discovered": 2, 00:38:48.265 "num_base_bdevs_operational": 2, 00:38:48.265 "base_bdevs_list": [ 00:38:48.265 { 00:38:48.265 "name": null, 00:38:48.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:48.265 "is_configured": false, 00:38:48.265 "data_offset": 0, 00:38:48.265 "data_size": 63488 00:38:48.265 }, 00:38:48.265 { 00:38:48.265 "name": null, 00:38:48.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:48.265 
"is_configured": false, 00:38:48.265 "data_offset": 2048, 00:38:48.265 "data_size": 63488 00:38:48.265 }, 00:38:48.265 { 00:38:48.265 "name": "BaseBdev3", 00:38:48.265 "uuid": "299e937a-2360-5a66-865f-142678ae8c6a", 00:38:48.265 "is_configured": true, 00:38:48.265 "data_offset": 2048, 00:38:48.265 "data_size": 63488 00:38:48.265 }, 00:38:48.265 { 00:38:48.265 "name": "BaseBdev4", 00:38:48.265 "uuid": "99f5f168-39aa-582d-a5ce-654a8378d8c9", 00:38:48.265 "is_configured": true, 00:38:48.265 "data_offset": 2048, 00:38:48.265 "data_size": 63488 00:38:48.265 } 00:38:48.265 ] 00:38:48.265 }' 00:38:48.265 23:20:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:48.266 23:20:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:48.525 23:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:48.525 23:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:48.525 23:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:48.525 23:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:48.525 23:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:48.525 23:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:48.525 23:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:48.525 23:20:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:48.525 23:20:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:48.525 23:20:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:48.525 23:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:38:48.525 "name": "raid_bdev1", 00:38:48.525 "uuid": "ffb406aa-1489-4d9d-8263-66f5f431cbea", 00:38:48.525 "strip_size_kb": 0, 00:38:48.525 "state": "online", 00:38:48.525 "raid_level": "raid1", 00:38:48.525 "superblock": true, 00:38:48.525 "num_base_bdevs": 4, 00:38:48.525 "num_base_bdevs_discovered": 2, 00:38:48.525 "num_base_bdevs_operational": 2, 00:38:48.525 "base_bdevs_list": [ 00:38:48.525 { 00:38:48.525 "name": null, 00:38:48.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:48.525 "is_configured": false, 00:38:48.525 "data_offset": 0, 00:38:48.525 "data_size": 63488 00:38:48.525 }, 00:38:48.525 { 00:38:48.525 "name": null, 00:38:48.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:48.525 "is_configured": false, 00:38:48.525 "data_offset": 2048, 00:38:48.525 "data_size": 63488 00:38:48.525 }, 00:38:48.525 { 00:38:48.525 "name": "BaseBdev3", 00:38:48.525 "uuid": "299e937a-2360-5a66-865f-142678ae8c6a", 00:38:48.525 "is_configured": true, 00:38:48.525 "data_offset": 2048, 00:38:48.525 "data_size": 63488 00:38:48.525 }, 00:38:48.525 { 00:38:48.525 "name": "BaseBdev4", 00:38:48.525 "uuid": "99f5f168-39aa-582d-a5ce-654a8378d8c9", 00:38:48.525 "is_configured": true, 00:38:48.525 "data_offset": 2048, 00:38:48.525 "data_size": 63488 00:38:48.525 } 00:38:48.525 ] 00:38:48.525 }' 00:38:48.525 23:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:48.525 23:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:48.525 23:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:48.784 23:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:48.784 23:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:38:48.784 23:20:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:38:48.784 23:20:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:38:48.784 23:20:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:38:48.784 23:20:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:48.784 23:20:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:38:48.784 23:20:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:48.784 23:20:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:38:48.784 23:20:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:48.784 23:20:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:48.784 [2024-12-09 23:20:29.174502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:48.784 [2024-12-09 23:20:29.174726] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:38:48.784 [2024-12-09 23:20:29.174754] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:38:48.784 request: 00:38:48.784 { 00:38:48.784 "base_bdev": "BaseBdev1", 00:38:48.784 "raid_bdev": "raid_bdev1", 00:38:48.784 "method": "bdev_raid_add_base_bdev", 00:38:48.784 "req_id": 1 00:38:48.784 } 00:38:48.784 Got JSON-RPC error response 00:38:48.784 response: 00:38:48.784 { 00:38:48.784 "code": -22, 00:38:48.784 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:38:48.784 } 00:38:48.784 23:20:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:38:48.784 23:20:29 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:38:48.784 23:20:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:48.784 23:20:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:48.784 23:20:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:48.784 23:20:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:38:49.729 23:20:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:49.729 23:20:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:49.729 23:20:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:49.729 23:20:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:49.729 23:20:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:49.729 23:20:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:38:49.729 23:20:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:49.729 23:20:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:49.729 23:20:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:49.729 23:20:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:49.729 23:20:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:49.729 23:20:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:49.729 23:20:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:49.729 23:20:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:38:49.729 23:20:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:49.729 23:20:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:49.729 "name": "raid_bdev1", 00:38:49.729 "uuid": "ffb406aa-1489-4d9d-8263-66f5f431cbea", 00:38:49.729 "strip_size_kb": 0, 00:38:49.729 "state": "online", 00:38:49.729 "raid_level": "raid1", 00:38:49.729 "superblock": true, 00:38:49.729 "num_base_bdevs": 4, 00:38:49.729 "num_base_bdevs_discovered": 2, 00:38:49.729 "num_base_bdevs_operational": 2, 00:38:49.729 "base_bdevs_list": [ 00:38:49.729 { 00:38:49.729 "name": null, 00:38:49.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:49.729 "is_configured": false, 00:38:49.729 "data_offset": 0, 00:38:49.729 "data_size": 63488 00:38:49.729 }, 00:38:49.729 { 00:38:49.729 "name": null, 00:38:49.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:49.730 "is_configured": false, 00:38:49.730 "data_offset": 2048, 00:38:49.730 "data_size": 63488 00:38:49.730 }, 00:38:49.730 { 00:38:49.730 "name": "BaseBdev3", 00:38:49.730 "uuid": "299e937a-2360-5a66-865f-142678ae8c6a", 00:38:49.730 "is_configured": true, 00:38:49.730 "data_offset": 2048, 00:38:49.730 "data_size": 63488 00:38:49.730 }, 00:38:49.730 { 00:38:49.730 "name": "BaseBdev4", 00:38:49.730 "uuid": "99f5f168-39aa-582d-a5ce-654a8378d8c9", 00:38:49.730 "is_configured": true, 00:38:49.730 "data_offset": 2048, 00:38:49.730 "data_size": 63488 00:38:49.730 } 00:38:49.730 ] 00:38:49.730 }' 00:38:49.730 23:20:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:49.730 23:20:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:49.988 23:20:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:49.988 23:20:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:49.988 23:20:30 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:49.988 23:20:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:49.988 23:20:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:49.988 23:20:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:49.988 23:20:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:49.988 23:20:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:49.988 23:20:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:50.248 23:20:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.248 23:20:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:50.248 "name": "raid_bdev1", 00:38:50.248 "uuid": "ffb406aa-1489-4d9d-8263-66f5f431cbea", 00:38:50.248 "strip_size_kb": 0, 00:38:50.248 "state": "online", 00:38:50.248 "raid_level": "raid1", 00:38:50.248 "superblock": true, 00:38:50.248 "num_base_bdevs": 4, 00:38:50.248 "num_base_bdevs_discovered": 2, 00:38:50.248 "num_base_bdevs_operational": 2, 00:38:50.248 "base_bdevs_list": [ 00:38:50.248 { 00:38:50.248 "name": null, 00:38:50.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:50.248 "is_configured": false, 00:38:50.248 "data_offset": 0, 00:38:50.248 "data_size": 63488 00:38:50.248 }, 00:38:50.248 { 00:38:50.248 "name": null, 00:38:50.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:50.248 "is_configured": false, 00:38:50.248 "data_offset": 2048, 00:38:50.248 "data_size": 63488 00:38:50.248 }, 00:38:50.248 { 00:38:50.248 "name": "BaseBdev3", 00:38:50.248 "uuid": "299e937a-2360-5a66-865f-142678ae8c6a", 00:38:50.248 "is_configured": true, 00:38:50.248 "data_offset": 2048, 00:38:50.248 "data_size": 63488 00:38:50.248 }, 
00:38:50.248 { 00:38:50.248 "name": "BaseBdev4", 00:38:50.248 "uuid": "99f5f168-39aa-582d-a5ce-654a8378d8c9", 00:38:50.248 "is_configured": true, 00:38:50.248 "data_offset": 2048, 00:38:50.248 "data_size": 63488 00:38:50.248 } 00:38:50.248 ] 00:38:50.248 }' 00:38:50.248 23:20:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:50.248 23:20:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:50.248 23:20:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:50.248 23:20:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:50.248 23:20:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 77885 00:38:50.248 23:20:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 77885 ']' 00:38:50.248 23:20:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 77885 00:38:50.248 23:20:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:38:50.248 23:20:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:50.248 23:20:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77885 00:38:50.248 killing process with pid 77885 00:38:50.248 Received shutdown signal, test time was about 60.000000 seconds 00:38:50.248 00:38:50.248 Latency(us) 00:38:50.248 [2024-12-09T23:20:30.884Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:50.248 [2024-12-09T23:20:30.884Z] =================================================================================================================== 00:38:50.248 [2024-12-09T23:20:30.884Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:50.248 23:20:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:38:50.248 23:20:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:50.248 23:20:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77885' 00:38:50.248 23:20:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 77885 00:38:50.248 [2024-12-09 23:20:30.799301] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:50.248 23:20:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 77885 00:38:50.248 [2024-12-09 23:20:30.799449] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:50.248 [2024-12-09 23:20:30.799520] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:50.248 [2024-12-09 23:20:30.799533] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:38:50.818 [2024-12-09 23:20:31.307476] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:38:52.197 00:38:52.197 real 0m25.588s 00:38:52.197 user 0m30.585s 00:38:52.197 sys 0m4.383s 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:52.197 ************************************ 00:38:52.197 END TEST raid_rebuild_test_sb 00:38:52.197 ************************************ 00:38:52.197 23:20:32 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:38:52.197 23:20:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:38:52.197 23:20:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:52.197 23:20:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:38:52.197 ************************************ 00:38:52.197 START TEST raid_rebuild_test_io 00:38:52.197 ************************************ 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78640 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78640 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78640 ']' 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:52.197 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:52.197 23:20:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:52.197 I/O size of 3145728 is greater than zero copy threshold (65536). 00:38:52.197 Zero copy mechanism will not be used. 00:38:52.197 [2024-12-09 23:20:32.649198] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:38:52.197 [2024-12-09 23:20:32.649334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78640 ] 00:38:52.197 [2024-12-09 23:20:32.828615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:52.457 [2024-12-09 23:20:32.949366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:52.715 [2024-12-09 23:20:33.166684] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:52.715 [2024-12-09 23:20:33.166734] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:52.973 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:52.973 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:38:52.973 23:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:38:52.973 23:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:38:52.973 23:20:33 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.973 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:52.973 BaseBdev1_malloc 00:38:52.973 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.973 23:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:38:52.973 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.974 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:52.974 [2024-12-09 23:20:33.531310] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:38:52.974 [2024-12-09 23:20:33.531378] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:52.974 [2024-12-09 23:20:33.531415] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:38:52.974 [2024-12-09 23:20:33.531431] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:52.974 [2024-12-09 23:20:33.533765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:52.974 [2024-12-09 23:20:33.533940] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:38:52.974 BaseBdev1 00:38:52.974 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.974 23:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:38:52.974 23:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:38:52.974 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.974 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:52.974 
BaseBdev2_malloc 00:38:52.974 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.974 23:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:38:52.974 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.974 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:52.974 [2024-12-09 23:20:33.583971] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:38:52.974 [2024-12-09 23:20:33.584037] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:52.974 [2024-12-09 23:20:33.584058] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:38:52.974 [2024-12-09 23:20:33.584075] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:52.974 [2024-12-09 23:20:33.586455] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:52.974 [2024-12-09 23:20:33.586495] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:38:52.974 BaseBdev2 00:38:52.974 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.974 23:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:38:52.974 23:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:38:52.974 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.974 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:53.240 BaseBdev3_malloc 00:38:53.240 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:53.240 23:20:33 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:38:53.240 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:53.241 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:53.241 [2024-12-09 23:20:33.655667] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:38:53.241 [2024-12-09 23:20:33.655735] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:53.241 [2024-12-09 23:20:33.655760] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:38:53.241 [2024-12-09 23:20:33.655774] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:53.241 [2024-12-09 23:20:33.658173] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:53.241 [2024-12-09 23:20:33.658219] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:38:53.241 BaseBdev3 00:38:53.241 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:53.241 23:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:38:53.241 23:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:38:53.241 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:53.241 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:53.241 BaseBdev4_malloc 00:38:53.241 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:53.242 23:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:38:53.242 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:38:53.242 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:53.242 [2024-12-09 23:20:33.714534] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:38:53.242 [2024-12-09 23:20:33.714606] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:53.242 [2024-12-09 23:20:33.714630] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:38:53.242 [2024-12-09 23:20:33.714644] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:53.242 [2024-12-09 23:20:33.717067] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:53.242 [2024-12-09 23:20:33.717116] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:38:53.242 BaseBdev4 00:38:53.242 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:53.242 23:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:38:53.243 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:53.243 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:53.243 spare_malloc 00:38:53.243 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:53.243 23:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:38:53.243 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:53.243 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:53.243 spare_delay 00:38:53.243 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:53.243 23:20:33 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:38:53.243 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:53.243 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:53.243 [2024-12-09 23:20:33.783175] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:53.243 [2024-12-09 23:20:33.783238] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:53.244 [2024-12-09 23:20:33.783261] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:38:53.244 [2024-12-09 23:20:33.783276] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:53.244 [2024-12-09 23:20:33.785696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:53.244 [2024-12-09 23:20:33.785740] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:53.244 spare 00:38:53.244 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:53.244 23:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:38:53.244 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:53.244 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:53.244 [2024-12-09 23:20:33.795204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:53.244 [2024-12-09 23:20:33.797277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:53.244 [2024-12-09 23:20:33.797478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:38:53.244 [2024-12-09 23:20:33.797542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:38:53.244 [2024-12-09 23:20:33.797633] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:38:53.244 [2024-12-09 23:20:33.797649] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:38:53.244 [2024-12-09 23:20:33.797928] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:38:53.244 [2024-12-09 23:20:33.798102] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:38:53.244 [2024-12-09 23:20:33.798115] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:38:53.245 [2024-12-09 23:20:33.798285] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:53.245 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:53.245 23:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:38:53.245 23:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:53.245 23:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:53.245 23:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:53.245 23:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:53.245 23:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:53.245 23:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:53.245 23:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:53.245 23:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:53.245 23:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:38:53.245 23:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:53.245 23:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:53.245 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:53.245 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:53.245 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:53.245 23:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:53.245 "name": "raid_bdev1", 00:38:53.245 "uuid": "2f68873f-016c-4878-b978-e73e076bc166", 00:38:53.245 "strip_size_kb": 0, 00:38:53.245 "state": "online", 00:38:53.246 "raid_level": "raid1", 00:38:53.246 "superblock": false, 00:38:53.246 "num_base_bdevs": 4, 00:38:53.246 "num_base_bdevs_discovered": 4, 00:38:53.246 "num_base_bdevs_operational": 4, 00:38:53.246 "base_bdevs_list": [ 00:38:53.246 { 00:38:53.246 "name": "BaseBdev1", 00:38:53.246 "uuid": "943f423b-560e-5c46-807a-f5e4c512b6d8", 00:38:53.246 "is_configured": true, 00:38:53.246 "data_offset": 0, 00:38:53.246 "data_size": 65536 00:38:53.246 }, 00:38:53.246 { 00:38:53.246 "name": "BaseBdev2", 00:38:53.246 "uuid": "696884f2-d95c-538e-bbba-f9c4e50dd0a3", 00:38:53.246 "is_configured": true, 00:38:53.246 "data_offset": 0, 00:38:53.246 "data_size": 65536 00:38:53.246 }, 00:38:53.246 { 00:38:53.246 "name": "BaseBdev3", 00:38:53.246 "uuid": "ba8b0f95-831c-5f96-85b6-7216845ddb5a", 00:38:53.246 "is_configured": true, 00:38:53.247 "data_offset": 0, 00:38:53.247 "data_size": 65536 00:38:53.247 }, 00:38:53.247 { 00:38:53.247 "name": "BaseBdev4", 00:38:53.247 "uuid": "41b54151-6a2e-5e10-97c0-1dda831b2c8c", 00:38:53.247 "is_configured": true, 00:38:53.247 "data_offset": 0, 00:38:53.247 "data_size": 65536 00:38:53.247 } 00:38:53.247 ] 00:38:53.247 }' 00:38:53.247 
23:20:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:53.247 23:20:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:53.858 23:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:38:53.858 23:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:38:53.858 23:20:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:53.859 23:20:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:53.859 [2024-12-09 23:20:34.246909] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:53.859 23:20:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:53.859 23:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:38:53.859 23:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:53.859 23:20:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:53.859 23:20:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:53.859 23:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:38:53.859 23:20:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:53.859 23:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:38:53.859 23:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:38:53.859 23:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:38:53.859 23:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:38:53.859 23:20:34 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:53.859 23:20:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:53.859 [2024-12-09 23:20:34.342554] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:38:53.859 23:20:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:53.859 23:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:38:53.859 23:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:53.859 23:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:53.859 23:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:53.859 23:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:53.859 23:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:38:53.859 23:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:53.859 23:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:53.859 23:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:53.859 23:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:53.859 23:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:53.859 23:20:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:53.859 23:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:53.859 23:20:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:53.859 23:20:34 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:53.859 23:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:53.859 "name": "raid_bdev1", 00:38:53.859 "uuid": "2f68873f-016c-4878-b978-e73e076bc166", 00:38:53.859 "strip_size_kb": 0, 00:38:53.859 "state": "online", 00:38:53.859 "raid_level": "raid1", 00:38:53.859 "superblock": false, 00:38:53.859 "num_base_bdevs": 4, 00:38:53.859 "num_base_bdevs_discovered": 3, 00:38:53.859 "num_base_bdevs_operational": 3, 00:38:53.859 "base_bdevs_list": [ 00:38:53.859 { 00:38:53.859 "name": null, 00:38:53.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:53.859 "is_configured": false, 00:38:53.859 "data_offset": 0, 00:38:53.859 "data_size": 65536 00:38:53.859 }, 00:38:53.859 { 00:38:53.859 "name": "BaseBdev2", 00:38:53.859 "uuid": "696884f2-d95c-538e-bbba-f9c4e50dd0a3", 00:38:53.859 "is_configured": true, 00:38:53.859 "data_offset": 0, 00:38:53.859 "data_size": 65536 00:38:53.859 }, 00:38:53.859 { 00:38:53.859 "name": "BaseBdev3", 00:38:53.859 "uuid": "ba8b0f95-831c-5f96-85b6-7216845ddb5a", 00:38:53.859 "is_configured": true, 00:38:53.859 "data_offset": 0, 00:38:53.859 "data_size": 65536 00:38:53.859 }, 00:38:53.859 { 00:38:53.859 "name": "BaseBdev4", 00:38:53.859 "uuid": "41b54151-6a2e-5e10-97c0-1dda831b2c8c", 00:38:53.859 "is_configured": true, 00:38:53.859 "data_offset": 0, 00:38:53.859 "data_size": 65536 00:38:53.859 } 00:38:53.859 ] 00:38:53.859 }' 00:38:53.859 23:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:53.859 23:20:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:53.859 [2024-12-09 23:20:34.466764] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:38:53.859 I/O size of 3145728 is greater than zero copy threshold (65536). 00:38:53.859 Zero copy mechanism will not be used. 00:38:53.859 Running I/O for 60 seconds... 
00:38:54.426 23:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:38:54.426 23:20:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:54.426 23:20:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:54.426 [2024-12-09 23:20:34.812967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:54.426 23:20:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:54.426 23:20:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:38:54.426 [2024-12-09 23:20:34.883866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:38:54.426 [2024-12-09 23:20:34.886294] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:54.426 [2024-12-09 23:20:35.016275] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:38:54.688 [2024-12-09 23:20:35.139701] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:38:54.688 [2024-12-09 23:20:35.140672] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:38:55.514 184.00 IOPS, 552.00 MiB/s [2024-12-09T23:20:36.150Z] 23:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:55.514 23:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:55.514 23:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:55.515 23:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:55.515 23:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:38:55.515 23:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:55.515 23:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:55.515 23:20:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.515 23:20:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:55.515 23:20:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.515 23:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:55.515 "name": "raid_bdev1", 00:38:55.515 "uuid": "2f68873f-016c-4878-b978-e73e076bc166", 00:38:55.515 "strip_size_kb": 0, 00:38:55.515 "state": "online", 00:38:55.515 "raid_level": "raid1", 00:38:55.515 "superblock": false, 00:38:55.515 "num_base_bdevs": 4, 00:38:55.515 "num_base_bdevs_discovered": 4, 00:38:55.515 "num_base_bdevs_operational": 4, 00:38:55.515 "process": { 00:38:55.515 "type": "rebuild", 00:38:55.515 "target": "spare", 00:38:55.515 "progress": { 00:38:55.515 "blocks": 14336, 00:38:55.515 "percent": 21 00:38:55.515 } 00:38:55.515 }, 00:38:55.515 "base_bdevs_list": [ 00:38:55.515 { 00:38:55.515 "name": "spare", 00:38:55.515 "uuid": "5c777728-a212-55b1-bdaa-38d53b82c0c4", 00:38:55.515 "is_configured": true, 00:38:55.515 "data_offset": 0, 00:38:55.515 "data_size": 65536 00:38:55.515 }, 00:38:55.515 { 00:38:55.515 "name": "BaseBdev2", 00:38:55.515 "uuid": "696884f2-d95c-538e-bbba-f9c4e50dd0a3", 00:38:55.515 "is_configured": true, 00:38:55.515 "data_offset": 0, 00:38:55.515 "data_size": 65536 00:38:55.515 }, 00:38:55.515 { 00:38:55.515 "name": "BaseBdev3", 00:38:55.515 "uuid": "ba8b0f95-831c-5f96-85b6-7216845ddb5a", 00:38:55.515 "is_configured": true, 00:38:55.515 "data_offset": 0, 00:38:55.515 "data_size": 65536 00:38:55.515 }, 00:38:55.515 { 00:38:55.515 "name": "BaseBdev4", 00:38:55.515 
"uuid": "41b54151-6a2e-5e10-97c0-1dda831b2c8c", 00:38:55.515 "is_configured": true, 00:38:55.515 "data_offset": 0, 00:38:55.515 "data_size": 65536 00:38:55.515 } 00:38:55.515 ] 00:38:55.515 }' 00:38:55.515 23:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:55.515 [2024-12-09 23:20:35.932763] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:38:55.515 23:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:55.515 23:20:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:55.515 23:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:55.515 23:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:38:55.515 23:20:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.515 23:20:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:55.515 [2024-12-09 23:20:36.021254] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:55.515 [2024-12-09 23:20:36.073081] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:55.515 [2024-12-09 23:20:36.077080] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:55.515 [2024-12-09 23:20:36.077134] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:55.515 [2024-12-09 23:20:36.077159] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:55.515 [2024-12-09 23:20:36.102131] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:38:55.515 23:20:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:38:55.515 23:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:38:55.515 23:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:55.515 23:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:55.515 23:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:55.515 23:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:55.515 23:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:38:55.515 23:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:55.515 23:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:55.515 23:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:55.515 23:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:55.515 23:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:55.515 23:20:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.515 23:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:55.515 23:20:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:55.774 23:20:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.774 23:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:55.774 "name": "raid_bdev1", 00:38:55.774 "uuid": "2f68873f-016c-4878-b978-e73e076bc166", 00:38:55.774 "strip_size_kb": 0, 00:38:55.774 "state": "online", 00:38:55.774 "raid_level": "raid1", 00:38:55.774 "superblock": false, 
00:38:55.774 "num_base_bdevs": 4, 00:38:55.774 "num_base_bdevs_discovered": 3, 00:38:55.774 "num_base_bdevs_operational": 3, 00:38:55.774 "base_bdevs_list": [ 00:38:55.774 { 00:38:55.774 "name": null, 00:38:55.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:55.774 "is_configured": false, 00:38:55.774 "data_offset": 0, 00:38:55.774 "data_size": 65536 00:38:55.774 }, 00:38:55.774 { 00:38:55.774 "name": "BaseBdev2", 00:38:55.774 "uuid": "696884f2-d95c-538e-bbba-f9c4e50dd0a3", 00:38:55.774 "is_configured": true, 00:38:55.774 "data_offset": 0, 00:38:55.775 "data_size": 65536 00:38:55.775 }, 00:38:55.775 { 00:38:55.775 "name": "BaseBdev3", 00:38:55.775 "uuid": "ba8b0f95-831c-5f96-85b6-7216845ddb5a", 00:38:55.775 "is_configured": true, 00:38:55.775 "data_offset": 0, 00:38:55.775 "data_size": 65536 00:38:55.775 }, 00:38:55.775 { 00:38:55.775 "name": "BaseBdev4", 00:38:55.775 "uuid": "41b54151-6a2e-5e10-97c0-1dda831b2c8c", 00:38:55.775 "is_configured": true, 00:38:55.775 "data_offset": 0, 00:38:55.775 "data_size": 65536 00:38:55.775 } 00:38:55.775 ] 00:38:55.775 }' 00:38:55.775 23:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:55.775 23:20:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:56.034 171.50 IOPS, 514.50 MiB/s [2024-12-09T23:20:36.670Z] 23:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:56.034 23:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:56.034 23:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:56.034 23:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:56.034 23:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:56.034 23:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:38:56.034 23:20:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:56.034 23:20:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:56.034 23:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:56.034 23:20:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:56.034 23:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:56.034 "name": "raid_bdev1", 00:38:56.034 "uuid": "2f68873f-016c-4878-b978-e73e076bc166", 00:38:56.034 "strip_size_kb": 0, 00:38:56.034 "state": "online", 00:38:56.034 "raid_level": "raid1", 00:38:56.034 "superblock": false, 00:38:56.034 "num_base_bdevs": 4, 00:38:56.034 "num_base_bdevs_discovered": 3, 00:38:56.034 "num_base_bdevs_operational": 3, 00:38:56.034 "base_bdevs_list": [ 00:38:56.034 { 00:38:56.034 "name": null, 00:38:56.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:56.034 "is_configured": false, 00:38:56.034 "data_offset": 0, 00:38:56.034 "data_size": 65536 00:38:56.034 }, 00:38:56.034 { 00:38:56.034 "name": "BaseBdev2", 00:38:56.034 "uuid": "696884f2-d95c-538e-bbba-f9c4e50dd0a3", 00:38:56.034 "is_configured": true, 00:38:56.034 "data_offset": 0, 00:38:56.034 "data_size": 65536 00:38:56.034 }, 00:38:56.034 { 00:38:56.034 "name": "BaseBdev3", 00:38:56.034 "uuid": "ba8b0f95-831c-5f96-85b6-7216845ddb5a", 00:38:56.034 "is_configured": true, 00:38:56.034 "data_offset": 0, 00:38:56.034 "data_size": 65536 00:38:56.034 }, 00:38:56.034 { 00:38:56.034 "name": "BaseBdev4", 00:38:56.034 "uuid": "41b54151-6a2e-5e10-97c0-1dda831b2c8c", 00:38:56.034 "is_configured": true, 00:38:56.034 "data_offset": 0, 00:38:56.034 "data_size": 65536 00:38:56.034 } 00:38:56.034 ] 00:38:56.034 }' 00:38:56.034 23:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:56.034 23:20:36 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:56.034 23:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:56.034 23:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:56.034 23:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:38:56.034 23:20:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:56.034 23:20:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:56.292 [2024-12-09 23:20:36.678118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:56.292 23:20:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:56.292 23:20:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:38:56.292 [2024-12-09 23:20:36.720549] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:38:56.292 [2024-12-09 23:20:36.723022] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:56.292 [2024-12-09 23:20:36.832412] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:38:56.292 [2024-12-09 23:20:36.833831] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:38:56.551 [2024-12-09 23:20:37.043491] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:38:56.551 [2024-12-09 23:20:37.043833] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:38:56.809 [2024-12-09 23:20:37.411376] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 
offset_end: 12288 00:38:57.326 169.67 IOPS, 509.00 MiB/s [2024-12-09T23:20:37.962Z] 23:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:57.326 23:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:57.326 23:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:57.326 23:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:57.326 23:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:57.326 23:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:57.326 23:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:57.326 23:20:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:57.326 23:20:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:57.326 23:20:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:57.326 23:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:57.326 "name": "raid_bdev1", 00:38:57.326 "uuid": "2f68873f-016c-4878-b978-e73e076bc166", 00:38:57.326 "strip_size_kb": 0, 00:38:57.326 "state": "online", 00:38:57.326 "raid_level": "raid1", 00:38:57.326 "superblock": false, 00:38:57.326 "num_base_bdevs": 4, 00:38:57.326 "num_base_bdevs_discovered": 4, 00:38:57.326 "num_base_bdevs_operational": 4, 00:38:57.326 "process": { 00:38:57.326 "type": "rebuild", 00:38:57.326 "target": "spare", 00:38:57.326 "progress": { 00:38:57.327 "blocks": 12288, 00:38:57.327 "percent": 18 00:38:57.327 } 00:38:57.327 }, 00:38:57.327 "base_bdevs_list": [ 00:38:57.327 { 00:38:57.327 "name": "spare", 00:38:57.327 "uuid": "5c777728-a212-55b1-bdaa-38d53b82c0c4", 00:38:57.327 
"is_configured": true, 00:38:57.327 "data_offset": 0, 00:38:57.327 "data_size": 65536 00:38:57.327 }, 00:38:57.327 { 00:38:57.327 "name": "BaseBdev2", 00:38:57.327 "uuid": "696884f2-d95c-538e-bbba-f9c4e50dd0a3", 00:38:57.327 "is_configured": true, 00:38:57.327 "data_offset": 0, 00:38:57.327 "data_size": 65536 00:38:57.327 }, 00:38:57.327 { 00:38:57.327 "name": "BaseBdev3", 00:38:57.327 "uuid": "ba8b0f95-831c-5f96-85b6-7216845ddb5a", 00:38:57.327 "is_configured": true, 00:38:57.327 "data_offset": 0, 00:38:57.327 "data_size": 65536 00:38:57.327 }, 00:38:57.327 { 00:38:57.327 "name": "BaseBdev4", 00:38:57.327 "uuid": "41b54151-6a2e-5e10-97c0-1dda831b2c8c", 00:38:57.327 "is_configured": true, 00:38:57.327 "data_offset": 0, 00:38:57.327 "data_size": 65536 00:38:57.327 } 00:38:57.327 ] 00:38:57.327 }' 00:38:57.327 23:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:57.327 23:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:57.327 23:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:57.327 23:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:57.327 23:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:38:57.327 23:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:38:57.327 23:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:38:57.327 23:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:38:57.327 [2024-12-09 23:20:37.862562] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:38:57.327 23:20:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:38:57.327 
23:20:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:57.327 23:20:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:57.327 [2024-12-09 23:20:37.866628] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:38:57.586 [2024-12-09 23:20:38.205218] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:38:57.586 [2024-12-09 23:20:38.205524] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:38:57.586 23:20:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:57.586 23:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:38:57.586 23:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:38:57.586 23:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:57.586 23:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:57.586 23:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:57.586 23:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:57.586 23:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:57.844 23:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:57.844 23:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:57.844 23:20:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:57.844 23:20:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:57.844 23:20:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:38:57.844 23:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:57.844 "name": "raid_bdev1", 00:38:57.844 "uuid": "2f68873f-016c-4878-b978-e73e076bc166", 00:38:57.844 "strip_size_kb": 0, 00:38:57.844 "state": "online", 00:38:57.844 "raid_level": "raid1", 00:38:57.844 "superblock": false, 00:38:57.844 "num_base_bdevs": 4, 00:38:57.844 "num_base_bdevs_discovered": 3, 00:38:57.844 "num_base_bdevs_operational": 3, 00:38:57.844 "process": { 00:38:57.844 "type": "rebuild", 00:38:57.844 "target": "spare", 00:38:57.844 "progress": { 00:38:57.844 "blocks": 18432, 00:38:57.844 "percent": 28 00:38:57.844 } 00:38:57.844 }, 00:38:57.844 "base_bdevs_list": [ 00:38:57.844 { 00:38:57.844 "name": "spare", 00:38:57.844 "uuid": "5c777728-a212-55b1-bdaa-38d53b82c0c4", 00:38:57.844 "is_configured": true, 00:38:57.844 "data_offset": 0, 00:38:57.844 "data_size": 65536 00:38:57.844 }, 00:38:57.844 { 00:38:57.844 "name": null, 00:38:57.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:57.844 "is_configured": false, 00:38:57.844 "data_offset": 0, 00:38:57.844 "data_size": 65536 00:38:57.844 }, 00:38:57.844 { 00:38:57.844 "name": "BaseBdev3", 00:38:57.844 "uuid": "ba8b0f95-831c-5f96-85b6-7216845ddb5a", 00:38:57.844 "is_configured": true, 00:38:57.844 "data_offset": 0, 00:38:57.844 "data_size": 65536 00:38:57.844 }, 00:38:57.844 { 00:38:57.844 "name": "BaseBdev4", 00:38:57.844 "uuid": "41b54151-6a2e-5e10-97c0-1dda831b2c8c", 00:38:57.844 "is_configured": true, 00:38:57.844 "data_offset": 0, 00:38:57.844 "data_size": 65536 00:38:57.844 } 00:38:57.844 ] 00:38:57.844 }' 00:38:57.844 23:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:57.844 23:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:57.844 23:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:57.844 23:20:38 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:57.844 23:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=488 00:38:57.844 23:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:57.844 23:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:57.844 23:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:57.844 23:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:57.844 23:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:57.844 23:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:57.844 23:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:57.844 23:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:57.844 23:20:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:57.844 23:20:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:57.844 23:20:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:57.844 23:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:57.844 "name": "raid_bdev1", 00:38:57.844 "uuid": "2f68873f-016c-4878-b978-e73e076bc166", 00:38:57.844 "strip_size_kb": 0, 00:38:57.844 "state": "online", 00:38:57.844 "raid_level": "raid1", 00:38:57.844 "superblock": false, 00:38:57.844 "num_base_bdevs": 4, 00:38:57.844 "num_base_bdevs_discovered": 3, 00:38:57.844 "num_base_bdevs_operational": 3, 00:38:57.844 "process": { 00:38:57.844 "type": "rebuild", 00:38:57.844 "target": "spare", 00:38:57.844 "progress": { 00:38:57.844 
"blocks": 20480, 00:38:57.844 "percent": 31 00:38:57.844 } 00:38:57.844 }, 00:38:57.844 "base_bdevs_list": [ 00:38:57.844 { 00:38:57.844 "name": "spare", 00:38:57.844 "uuid": "5c777728-a212-55b1-bdaa-38d53b82c0c4", 00:38:57.845 "is_configured": true, 00:38:57.845 "data_offset": 0, 00:38:57.845 "data_size": 65536 00:38:57.845 }, 00:38:57.845 { 00:38:57.845 "name": null, 00:38:57.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:57.845 "is_configured": false, 00:38:57.845 "data_offset": 0, 00:38:57.845 "data_size": 65536 00:38:57.845 }, 00:38:57.845 { 00:38:57.845 "name": "BaseBdev3", 00:38:57.845 "uuid": "ba8b0f95-831c-5f96-85b6-7216845ddb5a", 00:38:57.845 "is_configured": true, 00:38:57.845 "data_offset": 0, 00:38:57.845 "data_size": 65536 00:38:57.845 }, 00:38:57.845 { 00:38:57.845 "name": "BaseBdev4", 00:38:57.845 "uuid": "41b54151-6a2e-5e10-97c0-1dda831b2c8c", 00:38:57.845 "is_configured": true, 00:38:57.845 "data_offset": 0, 00:38:57.845 "data_size": 65536 00:38:57.845 } 00:38:57.845 ] 00:38:57.845 }' 00:38:57.845 23:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:57.845 23:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:57.845 23:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:57.845 [2024-12-09 23:20:38.466359] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:38:58.103 140.25 IOPS, 420.75 MiB/s [2024-12-09T23:20:38.739Z] 23:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:58.103 23:20:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:38:58.363 [2024-12-09 23:20:38.801834] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:38:58.363 [2024-12-09 23:20:38.931443] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:38:58.931 123.20 IOPS, 369.60 MiB/s [2024-12-09T23:20:39.567Z] 23:20:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:58.931 23:20:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:58.932 23:20:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:58.932 23:20:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:58.932 23:20:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:58.932 23:20:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:58.932 23:20:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:58.932 23:20:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:58.932 23:20:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.932 23:20:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:58.932 23:20:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.932 23:20:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:58.932 "name": "raid_bdev1", 00:38:58.932 "uuid": "2f68873f-016c-4878-b978-e73e076bc166", 00:38:58.932 "strip_size_kb": 0, 00:38:58.932 "state": "online", 00:38:58.932 "raid_level": "raid1", 00:38:58.932 "superblock": false, 00:38:58.932 "num_base_bdevs": 4, 00:38:58.932 "num_base_bdevs_discovered": 3, 00:38:58.932 "num_base_bdevs_operational": 3, 00:38:58.932 "process": { 00:38:58.932 "type": "rebuild", 00:38:58.932 "target": "spare", 00:38:58.932 "progress": { 00:38:58.932 "blocks": 36864, 
00:38:58.932 "percent": 56 00:38:58.932 } 00:38:58.932 }, 00:38:58.932 "base_bdevs_list": [ 00:38:58.932 { 00:38:58.932 "name": "spare", 00:38:58.932 "uuid": "5c777728-a212-55b1-bdaa-38d53b82c0c4", 00:38:58.932 "is_configured": true, 00:38:58.932 "data_offset": 0, 00:38:58.932 "data_size": 65536 00:38:58.932 }, 00:38:58.932 { 00:38:58.932 "name": null, 00:38:58.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:58.932 "is_configured": false, 00:38:58.932 "data_offset": 0, 00:38:58.932 "data_size": 65536 00:38:58.932 }, 00:38:58.932 { 00:38:58.932 "name": "BaseBdev3", 00:38:58.932 "uuid": "ba8b0f95-831c-5f96-85b6-7216845ddb5a", 00:38:58.932 "is_configured": true, 00:38:58.932 "data_offset": 0, 00:38:58.932 "data_size": 65536 00:38:58.932 }, 00:38:58.932 { 00:38:58.932 "name": "BaseBdev4", 00:38:58.932 "uuid": "41b54151-6a2e-5e10-97c0-1dda831b2c8c", 00:38:58.932 "is_configured": true, 00:38:58.932 "data_offset": 0, 00:38:58.932 "data_size": 65536 00:38:58.932 } 00:38:58.932 ] 00:38:58.932 }' 00:38:58.932 23:20:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:59.192 23:20:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:59.192 23:20:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:59.192 23:20:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:59.192 23:20:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:38:59.451 [2024-12-09 23:20:39.859108] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:38:59.451 [2024-12-09 23:20:39.968143] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:38:59.709 [2024-12-09 23:20:40.297102] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:39:00.226 109.33 IOPS, 328.00 MiB/s [2024-12-09T23:20:40.862Z] 23:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:39:00.226 23:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:00.226 23:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:00.226 23:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:00.226 23:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:00.226 23:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:00.226 23:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:00.226 23:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:00.226 23:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:00.226 23:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:00.226 23:20:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:00.226 23:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:00.227 "name": "raid_bdev1", 00:39:00.227 "uuid": "2f68873f-016c-4878-b978-e73e076bc166", 00:39:00.227 "strip_size_kb": 0, 00:39:00.227 "state": "online", 00:39:00.227 "raid_level": "raid1", 00:39:00.227 "superblock": false, 00:39:00.227 "num_base_bdevs": 4, 00:39:00.227 "num_base_bdevs_discovered": 3, 00:39:00.227 "num_base_bdevs_operational": 3, 00:39:00.227 "process": { 00:39:00.227 "type": "rebuild", 00:39:00.227 "target": "spare", 00:39:00.227 "progress": { 00:39:00.227 "blocks": 57344, 00:39:00.227 "percent": 87 00:39:00.227 } 00:39:00.227 }, 
00:39:00.227 "base_bdevs_list": [ 00:39:00.227 { 00:39:00.227 "name": "spare", 00:39:00.227 "uuid": "5c777728-a212-55b1-bdaa-38d53b82c0c4", 00:39:00.227 "is_configured": true, 00:39:00.227 "data_offset": 0, 00:39:00.227 "data_size": 65536 00:39:00.227 }, 00:39:00.227 { 00:39:00.227 "name": null, 00:39:00.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:00.227 "is_configured": false, 00:39:00.227 "data_offset": 0, 00:39:00.227 "data_size": 65536 00:39:00.227 }, 00:39:00.227 { 00:39:00.227 "name": "BaseBdev3", 00:39:00.227 "uuid": "ba8b0f95-831c-5f96-85b6-7216845ddb5a", 00:39:00.227 "is_configured": true, 00:39:00.227 "data_offset": 0, 00:39:00.227 "data_size": 65536 00:39:00.227 }, 00:39:00.227 { 00:39:00.227 "name": "BaseBdev4", 00:39:00.227 "uuid": "41b54151-6a2e-5e10-97c0-1dda831b2c8c", 00:39:00.227 "is_configured": true, 00:39:00.227 "data_offset": 0, 00:39:00.227 "data_size": 65536 00:39:00.227 } 00:39:00.227 ] 00:39:00.227 }' 00:39:00.227 23:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:00.227 23:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:00.227 23:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:00.227 23:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:00.227 23:20:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:39:00.486 [2024-12-09 23:20:41.063193] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:39:00.744 [2024-12-09 23:20:41.165226] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:39:00.744 [2024-12-09 23:20:41.168089] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:01.260 100.57 IOPS, 301.71 MiB/s [2024-12-09T23:20:41.896Z] 23:20:41 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:39:01.260 23:20:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:01.260 23:20:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:01.260 23:20:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:01.260 23:20:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:01.260 23:20:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:01.260 23:20:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:01.260 23:20:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:01.260 23:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.260 23:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:01.260 23:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.260 23:20:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:01.260 "name": "raid_bdev1", 00:39:01.260 "uuid": "2f68873f-016c-4878-b978-e73e076bc166", 00:39:01.260 "strip_size_kb": 0, 00:39:01.260 "state": "online", 00:39:01.260 "raid_level": "raid1", 00:39:01.260 "superblock": false, 00:39:01.260 "num_base_bdevs": 4, 00:39:01.260 "num_base_bdevs_discovered": 3, 00:39:01.260 "num_base_bdevs_operational": 3, 00:39:01.260 "base_bdevs_list": [ 00:39:01.260 { 00:39:01.260 "name": "spare", 00:39:01.260 "uuid": "5c777728-a212-55b1-bdaa-38d53b82c0c4", 00:39:01.260 "is_configured": true, 00:39:01.260 "data_offset": 0, 00:39:01.260 "data_size": 65536 00:39:01.260 }, 00:39:01.260 { 00:39:01.260 "name": null, 00:39:01.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:01.260 
"is_configured": false, 00:39:01.260 "data_offset": 0, 00:39:01.260 "data_size": 65536 00:39:01.260 }, 00:39:01.260 { 00:39:01.260 "name": "BaseBdev3", 00:39:01.260 "uuid": "ba8b0f95-831c-5f96-85b6-7216845ddb5a", 00:39:01.260 "is_configured": true, 00:39:01.260 "data_offset": 0, 00:39:01.260 "data_size": 65536 00:39:01.260 }, 00:39:01.260 { 00:39:01.260 "name": "BaseBdev4", 00:39:01.260 "uuid": "41b54151-6a2e-5e10-97c0-1dda831b2c8c", 00:39:01.260 "is_configured": true, 00:39:01.260 "data_offset": 0, 00:39:01.260 "data_size": 65536 00:39:01.260 } 00:39:01.260 ] 00:39:01.260 }' 00:39:01.260 23:20:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:01.260 23:20:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:39:01.260 23:20:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:01.260 23:20:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:39:01.260 23:20:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:39:01.260 23:20:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:01.260 23:20:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:01.260 23:20:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:39:01.260 23:20:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:01.260 23:20:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:01.260 23:20:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:01.260 23:20:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:01.260 23:20:41 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.260 23:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:01.260 23:20:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.518 23:20:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:01.518 "name": "raid_bdev1", 00:39:01.518 "uuid": "2f68873f-016c-4878-b978-e73e076bc166", 00:39:01.518 "strip_size_kb": 0, 00:39:01.518 "state": "online", 00:39:01.518 "raid_level": "raid1", 00:39:01.518 "superblock": false, 00:39:01.518 "num_base_bdevs": 4, 00:39:01.518 "num_base_bdevs_discovered": 3, 00:39:01.518 "num_base_bdevs_operational": 3, 00:39:01.518 "base_bdevs_list": [ 00:39:01.518 { 00:39:01.518 "name": "spare", 00:39:01.518 "uuid": "5c777728-a212-55b1-bdaa-38d53b82c0c4", 00:39:01.518 "is_configured": true, 00:39:01.518 "data_offset": 0, 00:39:01.518 "data_size": 65536 00:39:01.518 }, 00:39:01.518 { 00:39:01.518 "name": null, 00:39:01.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:01.518 "is_configured": false, 00:39:01.518 "data_offset": 0, 00:39:01.518 "data_size": 65536 00:39:01.518 }, 00:39:01.518 { 00:39:01.518 "name": "BaseBdev3", 00:39:01.518 "uuid": "ba8b0f95-831c-5f96-85b6-7216845ddb5a", 00:39:01.518 "is_configured": true, 00:39:01.518 "data_offset": 0, 00:39:01.518 "data_size": 65536 00:39:01.518 }, 00:39:01.518 { 00:39:01.518 "name": "BaseBdev4", 00:39:01.518 "uuid": "41b54151-6a2e-5e10-97c0-1dda831b2c8c", 00:39:01.518 "is_configured": true, 00:39:01.518 "data_offset": 0, 00:39:01.518 "data_size": 65536 00:39:01.518 } 00:39:01.518 ] 00:39:01.518 }' 00:39:01.518 23:20:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:01.518 23:20:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:01.518 23:20:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:39:01.518 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:01.518 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:39:01.518 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:01.518 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:01.518 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:01.518 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:01.518 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:01.518 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:01.518 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:01.518 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:01.518 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:01.518 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:01.518 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:01.518 23:20:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.518 23:20:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:01.518 23:20:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.518 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:01.518 "name": "raid_bdev1", 00:39:01.518 "uuid": "2f68873f-016c-4878-b978-e73e076bc166", 00:39:01.518 "strip_size_kb": 0, 00:39:01.518 
"state": "online", 00:39:01.518 "raid_level": "raid1", 00:39:01.518 "superblock": false, 00:39:01.518 "num_base_bdevs": 4, 00:39:01.518 "num_base_bdevs_discovered": 3, 00:39:01.518 "num_base_bdevs_operational": 3, 00:39:01.518 "base_bdevs_list": [ 00:39:01.518 { 00:39:01.518 "name": "spare", 00:39:01.518 "uuid": "5c777728-a212-55b1-bdaa-38d53b82c0c4", 00:39:01.518 "is_configured": true, 00:39:01.518 "data_offset": 0, 00:39:01.518 "data_size": 65536 00:39:01.518 }, 00:39:01.518 { 00:39:01.518 "name": null, 00:39:01.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:01.518 "is_configured": false, 00:39:01.518 "data_offset": 0, 00:39:01.518 "data_size": 65536 00:39:01.518 }, 00:39:01.518 { 00:39:01.518 "name": "BaseBdev3", 00:39:01.518 "uuid": "ba8b0f95-831c-5f96-85b6-7216845ddb5a", 00:39:01.518 "is_configured": true, 00:39:01.518 "data_offset": 0, 00:39:01.518 "data_size": 65536 00:39:01.518 }, 00:39:01.518 { 00:39:01.518 "name": "BaseBdev4", 00:39:01.518 "uuid": "41b54151-6a2e-5e10-97c0-1dda831b2c8c", 00:39:01.518 "is_configured": true, 00:39:01.518 "data_offset": 0, 00:39:01.518 "data_size": 65536 00:39:01.518 } 00:39:01.518 ] 00:39:01.518 }' 00:39:01.519 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:01.519 23:20:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:02.163 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:39:02.163 23:20:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:02.163 23:20:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:02.163 [2024-12-09 23:20:42.458212] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:02.163 [2024-12-09 23:20:42.458412] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:02.163 00:39:02.163 Latency(us) 00:39:02.163 
[2024-12-09T23:20:42.799Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:02.163 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:39:02.163 raid_bdev1 : 8.01 94.08 282.25 0.00 0.00 16089.67 305.97 117069.93 00:39:02.163 [2024-12-09T23:20:42.799Z] =================================================================================================================== 00:39:02.163 [2024-12-09T23:20:42.799Z] Total : 94.08 282.25 0.00 0.00 16089.67 305.97 117069.93 00:39:02.163 { 00:39:02.163 "results": [ 00:39:02.163 { 00:39:02.163 "job": "raid_bdev1", 00:39:02.163 "core_mask": "0x1", 00:39:02.163 "workload": "randrw", 00:39:02.163 "percentage": 50, 00:39:02.163 "status": "finished", 00:39:02.163 "queue_depth": 2, 00:39:02.163 "io_size": 3145728, 00:39:02.163 "runtime": 8.014049, 00:39:02.163 "iops": 94.0847753738466, 00:39:02.163 "mibps": 282.2543261215398, 00:39:02.163 "io_failed": 0, 00:39:02.163 "io_timeout": 0, 00:39:02.163 "avg_latency_us": 16089.665313774995, 00:39:02.163 "min_latency_us": 305.96626506024097, 00:39:02.163 "max_latency_us": 117069.93092369477 00:39:02.163 } 00:39:02.163 ], 00:39:02.163 "core_count": 1 00:39:02.163 } 00:39:02.163 [2024-12-09 23:20:42.492513] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:02.163 [2024-12-09 23:20:42.492604] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:02.163 [2024-12-09 23:20:42.492712] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:02.163 [2024-12-09 23:20:42.492729] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:39:02.163 23:20:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:02.163 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:02.163 23:20:42 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:02.163 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:39:02.163 23:20:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:02.163 23:20:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:02.163 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:39:02.163 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:39:02.163 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:39:02.163 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:39:02.163 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:39:02.163 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:39:02.163 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:02.163 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:39:02.163 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:02.163 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:39:02.163 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:02.163 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:02.163 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:39:02.163 /dev/nbd0 00:39:02.421 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:39:02.421 23:20:42 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:39:02.421 23:20:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:39:02.421 23:20:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:39:02.421 23:20:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:02.421 23:20:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:02.421 23:20:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:39:02.421 23:20:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:39:02.421 23:20:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:02.421 23:20:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:02.421 23:20:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:02.421 1+0 records in 00:39:02.421 1+0 records out 00:39:02.421 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033458 s, 12.2 MB/s 00:39:02.421 23:20:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:02.421 23:20:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:39:02.421 23:20:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:02.421 23:20:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:02.421 23:20:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:39:02.421 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:02.421 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:39:02.421 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:39:02.421 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:39:02.422 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:39:02.422 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:39:02.422 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:39:02.422 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:39:02.422 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:39:02.422 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:39:02.422 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:02.422 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:39:02.422 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:02.422 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:39:02.422 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:02.422 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:02.422 23:20:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:39:02.422 /dev/nbd1 00:39:02.422 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:39:02.680 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:39:02.680 23:20:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 
00:39:02.680 23:20:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:39:02.680 23:20:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:02.680 23:20:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:02.680 23:20:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:39:02.680 23:20:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:39:02.680 23:20:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:02.680 23:20:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:02.680 23:20:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:02.680 1+0 records in 00:39:02.680 1+0 records out 00:39:02.680 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358221 s, 11.4 MB/s 00:39:02.680 23:20:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:02.680 23:20:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:39:02.680 23:20:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:02.680 23:20:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:02.680 23:20:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:39:02.680 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:02.680 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:02.680 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:39:02.680 23:20:43 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:39:02.680 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:39:02.680 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:39:02.680 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:02.680 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:39:02.680 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:02.680 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:39:02.938 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:39:02.938 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:39:02.938 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:39:02.938 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:02.938 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:02.938 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:39:02.938 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:39:02.938 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:39:02.938 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:39:02.938 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:39:02.938 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:39:02.938 23:20:43 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:39:02.938 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:39:02.938 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:02.938 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:39:02.938 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:02.938 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:39:02.938 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:02.938 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:02.938 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:39:03.196 /dev/nbd1 00:39:03.196 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:39:03.196 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:39:03.196 23:20:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:39:03.196 23:20:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:39:03.196 23:20:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:03.196 23:20:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:03.196 23:20:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:39:03.196 23:20:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:39:03.196 23:20:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:03.196 23:20:43 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:03.196 23:20:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:03.196 1+0 records in 00:39:03.196 1+0 records out 00:39:03.196 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331916 s, 12.3 MB/s 00:39:03.196 23:20:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:03.196 23:20:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:39:03.196 23:20:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:03.196 23:20:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:03.196 23:20:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:39:03.196 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:03.196 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:03.196 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:39:03.455 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:39:03.455 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:39:03.455 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:39:03.455 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:03.455 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:39:03.455 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:03.455 23:20:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:39:03.713 23:20:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:39:03.713 23:20:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:39:03.713 23:20:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:39:03.713 23:20:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:03.713 23:20:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:03.713 23:20:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:39:03.713 23:20:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:39:03.713 23:20:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:39:03.713 23:20:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:39:03.713 23:20:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:39:03.713 23:20:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:39:03.713 23:20:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:03.713 23:20:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:39:03.713 23:20:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:03.713 23:20:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:39:03.713 23:20:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:03.713 23:20:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:03.713 23:20:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:39:03.713 23:20:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:03.713 23:20:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:03.713 23:20:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:03.972 23:20:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:39:03.972 23:20:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:39:03.972 23:20:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:39:03.972 23:20:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78640 00:39:03.972 23:20:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78640 ']' 00:39:03.972 23:20:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78640 00:39:03.972 23:20:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:39:03.972 23:20:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:03.972 23:20:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78640 00:39:03.972 23:20:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:03.972 killing process with pid 78640 00:39:03.972 Received shutdown signal, test time was about 9.946195 seconds 00:39:03.972 00:39:03.972 Latency(us) 00:39:03.972 [2024-12-09T23:20:44.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:03.972 [2024-12-09T23:20:44.608Z] =================================================================================================================== 00:39:03.972 [2024-12-09T23:20:44.608Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:03.972 23:20:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:39:03.972 23:20:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78640' 00:39:03.972 23:20:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78640 00:39:03.972 [2024-12-09 23:20:44.399120] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:03.972 23:20:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 78640 00:39:04.230 [2024-12-09 23:20:44.830211] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:05.611 23:20:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:39:05.611 00:39:05.611 real 0m13.506s 00:39:05.611 user 0m16.883s 00:39:05.611 sys 0m2.064s 00:39:05.611 23:20:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:05.611 ************************************ 00:39:05.611 END TEST raid_rebuild_test_io 00:39:05.611 ************************************ 00:39:05.611 23:20:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:05.611 23:20:46 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:39:05.611 23:20:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:39:05.611 23:20:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:05.611 23:20:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:39:05.612 ************************************ 00:39:05.612 START TEST raid_rebuild_test_sb_io 00:39:05.612 ************************************ 00:39:05.612 23:20:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:39:05.612 23:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:39:05.612 23:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:39:05.612 23:20:46 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:39:05.612 23:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:39:05.612 23:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:39:05.612 23:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:39:05.612 23:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:39:05.612 23:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:39:05.612 23:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:39:05.612 23:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:39:05.612 23:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:39:05.612 23:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:39:05.612 23:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:39:05.612 23:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:39:05.612 23:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:39:05.612 23:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:39:05.612 23:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:39:05.612 23:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:39:05.612 23:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:39:05.612 23:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:39:05.612 23:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 
00:39:05.612 23:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:39:05.612 23:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:39:05.612 23:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:39:05.612 23:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:39:05.612 23:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:39:05.612 23:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:39:05.612 23:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:39:05.612 23:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:39:05.612 23:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:39:05.612 23:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79050 00:39:05.612 23:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79050 00:39:05.612 23:20:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:39:05.612 23:20:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79050 ']' 00:39:05.612 23:20:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:05.612 23:20:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:05.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:05.612 23:20:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:05.612 23:20:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:05.612 23:20:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:05.612 I/O size of 3145728 is greater than zero copy threshold (65536). 00:39:05.612 Zero copy mechanism will not be used. 00:39:05.612 [2024-12-09 23:20:46.226208] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:39:05.612 [2024-12-09 23:20:46.226339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79050 ] 00:39:05.871 [2024-12-09 23:20:46.409213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:06.130 [2024-12-09 23:20:46.529089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:06.130 [2024-12-09 23:20:46.718070] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:06.130 [2024-12-09 23:20:46.718141] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:06.698 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:06.698 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:39:06.698 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:39:06.698 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:39:06.698 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:06.698 23:20:47 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:06.698 BaseBdev1_malloc 00:39:06.698 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:06.698 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:39:06.698 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:06.698 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:06.698 [2024-12-09 23:20:47.118875] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:39:06.698 [2024-12-09 23:20:47.118948] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:06.698 [2024-12-09 23:20:47.118974] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:39:06.698 [2024-12-09 23:20:47.118989] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:06.698 [2024-12-09 23:20:47.121458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:06.699 [2024-12-09 23:20:47.121502] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:39:06.699 BaseBdev1 00:39:06.699 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:06.699 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:39:06.699 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:39:06.699 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:06.699 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:06.699 BaseBdev2_malloc 00:39:06.699 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:06.699 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:39:06.699 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:06.699 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:06.699 [2024-12-09 23:20:47.179024] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:39:06.699 [2024-12-09 23:20:47.179222] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:06.699 [2024-12-09 23:20:47.179255] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:39:06.699 [2024-12-09 23:20:47.179271] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:06.699 [2024-12-09 23:20:47.181807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:06.699 [2024-12-09 23:20:47.181851] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:39:06.699 BaseBdev2 00:39:06.699 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:06.699 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:39:06.699 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:39:06.699 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:06.699 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:06.699 BaseBdev3_malloc 00:39:06.699 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:06.699 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:39:06.699 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:06.699 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:06.699 [2024-12-09 23:20:47.251344] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:39:06.699 [2024-12-09 23:20:47.251428] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:06.699 [2024-12-09 23:20:47.251455] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:39:06.699 [2024-12-09 23:20:47.251471] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:06.699 [2024-12-09 23:20:47.253956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:06.699 [2024-12-09 23:20:47.254123] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:39:06.699 BaseBdev3 00:39:06.699 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:06.699 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:39:06.699 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:39:06.699 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:06.699 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:06.699 BaseBdev4_malloc 00:39:06.699 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:06.699 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:39:06.699 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:06.699 23:20:47 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:06.699 [2024-12-09 23:20:47.311407] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:39:06.699 [2024-12-09 23:20:47.311475] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:06.699 [2024-12-09 23:20:47.311500] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:39:06.699 [2024-12-09 23:20:47.311515] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:06.699 [2024-12-09 23:20:47.313864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:06.699 [2024-12-09 23:20:47.313913] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:39:06.699 BaseBdev4 00:39:06.699 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:06.699 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:39:06.699 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:06.699 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:06.958 spare_malloc 00:39:06.958 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:06.958 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:39:06.958 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:06.958 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:06.958 spare_delay 00:39:06.958 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:06.958 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:39:06.958 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:06.958 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:06.958 [2024-12-09 23:20:47.383143] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:39:06.958 [2024-12-09 23:20:47.383205] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:06.958 [2024-12-09 23:20:47.383228] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:39:06.958 [2024-12-09 23:20:47.383243] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:06.958 [2024-12-09 23:20:47.385716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:06.958 [2024-12-09 23:20:47.385762] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:39:06.958 spare 00:39:06.958 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:06.958 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:39:06.958 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:06.958 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:06.958 [2024-12-09 23:20:47.395176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:06.958 [2024-12-09 23:20:47.397517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:06.958 [2024-12-09 23:20:47.397584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:39:06.958 [2024-12-09 23:20:47.397640] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:39:06.958 [2024-12-09 23:20:47.397844] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:39:06.958 [2024-12-09 23:20:47.397860] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:39:06.958 [2024-12-09 23:20:47.398159] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:39:06.958 [2024-12-09 23:20:47.398374] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:39:06.958 [2024-12-09 23:20:47.398386] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:39:06.958 [2024-12-09 23:20:47.398588] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:06.958 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:06.958 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:39:06.958 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:06.958 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:06.958 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:06.958 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:06.958 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:06.958 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:06.958 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:06.958 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:39:06.958 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:06.958 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:06.958 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:06.958 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:06.958 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:06.958 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:06.958 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:06.958 "name": "raid_bdev1", 00:39:06.958 "uuid": "2e0d3968-3c7b-4f91-8392-cbe7268244c6", 00:39:06.958 "strip_size_kb": 0, 00:39:06.958 "state": "online", 00:39:06.958 "raid_level": "raid1", 00:39:06.958 "superblock": true, 00:39:06.958 "num_base_bdevs": 4, 00:39:06.958 "num_base_bdevs_discovered": 4, 00:39:06.958 "num_base_bdevs_operational": 4, 00:39:06.958 "base_bdevs_list": [ 00:39:06.958 { 00:39:06.958 "name": "BaseBdev1", 00:39:06.958 "uuid": "36f2ec29-cbda-5b8e-bf6f-d21e47ed8950", 00:39:06.958 "is_configured": true, 00:39:06.958 "data_offset": 2048, 00:39:06.958 "data_size": 63488 00:39:06.958 }, 00:39:06.958 { 00:39:06.958 "name": "BaseBdev2", 00:39:06.958 "uuid": "1d946f3d-49bb-5db8-8721-4dfefe74cabf", 00:39:06.958 "is_configured": true, 00:39:06.958 "data_offset": 2048, 00:39:06.958 "data_size": 63488 00:39:06.958 }, 00:39:06.958 { 00:39:06.958 "name": "BaseBdev3", 00:39:06.958 "uuid": "37a2e591-47e0-5516-899e-a5ca4bafce16", 00:39:06.958 "is_configured": true, 00:39:06.958 "data_offset": 2048, 00:39:06.958 "data_size": 63488 00:39:06.958 }, 00:39:06.958 { 00:39:06.958 "name": "BaseBdev4", 00:39:06.958 "uuid": "aa2b4c44-c146-5142-b6a2-ff4733fe9317", 00:39:06.958 
"is_configured": true, 00:39:06.958 "data_offset": 2048, 00:39:06.959 "data_size": 63488 00:39:06.959 } 00:39:06.959 ] 00:39:06.959 }' 00:39:06.959 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:06.959 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:07.527 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:39:07.527 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:39:07.527 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:07.527 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:07.527 [2024-12-09 23:20:47.882918] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:07.528 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:07.528 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:39:07.528 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:07.528 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:07.528 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:07.528 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:39:07.528 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:07.528 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:39:07.528 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:39:07.528 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:39:07.528 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:39:07.528 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:07.528 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:07.528 [2024-12-09 23:20:47.962538] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:39:07.528 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:07.528 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:39:07.528 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:07.528 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:07.528 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:07.528 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:07.528 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:07.528 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:07.528 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:07.528 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:07.528 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:07.528 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:07.528 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:39:07.528 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:07.528 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:07.528 23:20:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:07.528 23:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:07.528 "name": "raid_bdev1", 00:39:07.528 "uuid": "2e0d3968-3c7b-4f91-8392-cbe7268244c6", 00:39:07.528 "strip_size_kb": 0, 00:39:07.528 "state": "online", 00:39:07.528 "raid_level": "raid1", 00:39:07.528 "superblock": true, 00:39:07.528 "num_base_bdevs": 4, 00:39:07.528 "num_base_bdevs_discovered": 3, 00:39:07.528 "num_base_bdevs_operational": 3, 00:39:07.528 "base_bdevs_list": [ 00:39:07.528 { 00:39:07.528 "name": null, 00:39:07.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:07.528 "is_configured": false, 00:39:07.528 "data_offset": 0, 00:39:07.528 "data_size": 63488 00:39:07.528 }, 00:39:07.528 { 00:39:07.528 "name": "BaseBdev2", 00:39:07.528 "uuid": "1d946f3d-49bb-5db8-8721-4dfefe74cabf", 00:39:07.528 "is_configured": true, 00:39:07.528 "data_offset": 2048, 00:39:07.528 "data_size": 63488 00:39:07.528 }, 00:39:07.528 { 00:39:07.528 "name": "BaseBdev3", 00:39:07.528 "uuid": "37a2e591-47e0-5516-899e-a5ca4bafce16", 00:39:07.528 "is_configured": true, 00:39:07.528 "data_offset": 2048, 00:39:07.528 "data_size": 63488 00:39:07.528 }, 00:39:07.528 { 00:39:07.528 "name": "BaseBdev4", 00:39:07.528 "uuid": "aa2b4c44-c146-5142-b6a2-ff4733fe9317", 00:39:07.528 "is_configured": true, 00:39:07.528 "data_offset": 2048, 00:39:07.528 "data_size": 63488 00:39:07.528 } 00:39:07.528 ] 00:39:07.528 }' 00:39:07.528 23:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:07.528 23:20:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:07.528 [2024-12-09 
23:20:48.046672] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:39:07.528 I/O size of 3145728 is greater than zero copy threshold (65536). 00:39:07.528 Zero copy mechanism will not be used. 00:39:07.528 Running I/O for 60 seconds... 00:39:07.787 23:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:39:07.787 23:20:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:07.787 23:20:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:08.046 [2024-12-09 23:20:48.439294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:08.046 23:20:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.046 23:20:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:39:08.046 [2024-12-09 23:20:48.495173] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:39:08.046 [2024-12-09 23:20:48.497459] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:08.046 [2024-12-09 23:20:48.614443] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:39:08.046 [2024-12-09 23:20:48.614986] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:39:08.306 [2024-12-09 23:20:48.833660] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:39:08.306 [2024-12-09 23:20:48.834426] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:39:08.565 136.00 IOPS, 408.00 MiB/s [2024-12-09T23:20:49.201Z] [2024-12-09 23:20:49.179699] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:39:08.925 [2024-12-09 23:20:49.304904] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:39:08.925 [2024-12-09 23:20:49.305652] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:39:08.925 23:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:08.925 23:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:08.925 23:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:08.925 23:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:08.925 23:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:08.925 23:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:08.925 23:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:08.925 23:20:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.925 23:20:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:08.925 23:20:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.220 23:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:09.221 "name": "raid_bdev1", 00:39:09.221 "uuid": "2e0d3968-3c7b-4f91-8392-cbe7268244c6", 00:39:09.221 "strip_size_kb": 0, 00:39:09.221 "state": "online", 00:39:09.221 "raid_level": "raid1", 00:39:09.221 "superblock": true, 00:39:09.221 "num_base_bdevs": 4, 00:39:09.221 "num_base_bdevs_discovered": 4, 00:39:09.221 "num_base_bdevs_operational": 4, 00:39:09.221 
"process": { 00:39:09.221 "type": "rebuild", 00:39:09.221 "target": "spare", 00:39:09.221 "progress": { 00:39:09.221 "blocks": 10240, 00:39:09.221 "percent": 16 00:39:09.221 } 00:39:09.221 }, 00:39:09.221 "base_bdevs_list": [ 00:39:09.221 { 00:39:09.221 "name": "spare", 00:39:09.221 "uuid": "24483110-9b4c-54c7-b14a-e660ca580c27", 00:39:09.221 "is_configured": true, 00:39:09.221 "data_offset": 2048, 00:39:09.221 "data_size": 63488 00:39:09.221 }, 00:39:09.221 { 00:39:09.221 "name": "BaseBdev2", 00:39:09.221 "uuid": "1d946f3d-49bb-5db8-8721-4dfefe74cabf", 00:39:09.221 "is_configured": true, 00:39:09.221 "data_offset": 2048, 00:39:09.221 "data_size": 63488 00:39:09.221 }, 00:39:09.221 { 00:39:09.221 "name": "BaseBdev3", 00:39:09.221 "uuid": "37a2e591-47e0-5516-899e-a5ca4bafce16", 00:39:09.221 "is_configured": true, 00:39:09.221 "data_offset": 2048, 00:39:09.221 "data_size": 63488 00:39:09.221 }, 00:39:09.221 { 00:39:09.221 "name": "BaseBdev4", 00:39:09.221 "uuid": "aa2b4c44-c146-5142-b6a2-ff4733fe9317", 00:39:09.221 "is_configured": true, 00:39:09.221 "data_offset": 2048, 00:39:09.221 "data_size": 63488 00:39:09.221 } 00:39:09.221 ] 00:39:09.221 }' 00:39:09.221 23:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:09.221 23:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:09.221 23:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:09.221 23:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:09.222 23:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:39:09.222 23:20:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.222 23:20:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:09.222 [2024-12-09 
23:20:49.631827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:09.222 [2024-12-09 23:20:49.642454] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:39:09.222 [2024-12-09 23:20:49.643860] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:39:09.222 [2024-12-09 23:20:49.745250] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:39:09.222 [2024-12-09 23:20:49.755077] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:09.222 [2024-12-09 23:20:49.755286] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:09.222 [2024-12-09 23:20:49.755313] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:39:09.222 [2024-12-09 23:20:49.778238] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:39:09.222 23:20:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.222 23:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:39:09.222 23:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:09.222 23:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:09.222 23:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:09.222 23:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:09.222 23:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:09.222 23:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:09.222 
23:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:09.222 23:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:09.222 23:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:09.222 23:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:09.222 23:20:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.222 23:20:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:09.222 23:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:09.222 23:20:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.487 23:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:09.487 "name": "raid_bdev1", 00:39:09.487 "uuid": "2e0d3968-3c7b-4f91-8392-cbe7268244c6", 00:39:09.487 "strip_size_kb": 0, 00:39:09.487 "state": "online", 00:39:09.487 "raid_level": "raid1", 00:39:09.487 "superblock": true, 00:39:09.487 "num_base_bdevs": 4, 00:39:09.487 "num_base_bdevs_discovered": 3, 00:39:09.487 "num_base_bdevs_operational": 3, 00:39:09.487 "base_bdevs_list": [ 00:39:09.487 { 00:39:09.487 "name": null, 00:39:09.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:09.487 "is_configured": false, 00:39:09.487 "data_offset": 0, 00:39:09.487 "data_size": 63488 00:39:09.487 }, 00:39:09.487 { 00:39:09.487 "name": "BaseBdev2", 00:39:09.487 "uuid": "1d946f3d-49bb-5db8-8721-4dfefe74cabf", 00:39:09.487 "is_configured": true, 00:39:09.487 "data_offset": 2048, 00:39:09.487 "data_size": 63488 00:39:09.487 }, 00:39:09.487 { 00:39:09.487 "name": "BaseBdev3", 00:39:09.487 "uuid": "37a2e591-47e0-5516-899e-a5ca4bafce16", 00:39:09.487 "is_configured": true, 00:39:09.487 "data_offset": 2048, 
00:39:09.487 "data_size": 63488 00:39:09.487 }, 00:39:09.487 { 00:39:09.487 "name": "BaseBdev4", 00:39:09.487 "uuid": "aa2b4c44-c146-5142-b6a2-ff4733fe9317", 00:39:09.487 "is_configured": true, 00:39:09.487 "data_offset": 2048, 00:39:09.487 "data_size": 63488 00:39:09.487 } 00:39:09.487 ] 00:39:09.487 }' 00:39:09.487 23:20:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:09.487 23:20:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:09.747 129.50 IOPS, 388.50 MiB/s [2024-12-09T23:20:50.383Z] 23:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:09.747 23:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:09.747 23:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:39:09.747 23:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:09.747 23:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:09.747 23:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:09.747 23:20:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.747 23:20:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:09.747 23:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:09.747 23:20:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.747 23:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:09.747 "name": "raid_bdev1", 00:39:09.747 "uuid": "2e0d3968-3c7b-4f91-8392-cbe7268244c6", 00:39:09.747 "strip_size_kb": 0, 00:39:09.747 "state": "online", 00:39:09.747 "raid_level": "raid1", 
00:39:09.747 "superblock": true, 00:39:09.747 "num_base_bdevs": 4, 00:39:09.747 "num_base_bdevs_discovered": 3, 00:39:09.747 "num_base_bdevs_operational": 3, 00:39:09.747 "base_bdevs_list": [ 00:39:09.747 { 00:39:09.747 "name": null, 00:39:09.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:09.747 "is_configured": false, 00:39:09.747 "data_offset": 0, 00:39:09.747 "data_size": 63488 00:39:09.747 }, 00:39:09.747 { 00:39:09.747 "name": "BaseBdev2", 00:39:09.747 "uuid": "1d946f3d-49bb-5db8-8721-4dfefe74cabf", 00:39:09.747 "is_configured": true, 00:39:09.747 "data_offset": 2048, 00:39:09.747 "data_size": 63488 00:39:09.747 }, 00:39:09.747 { 00:39:09.747 "name": "BaseBdev3", 00:39:09.747 "uuid": "37a2e591-47e0-5516-899e-a5ca4bafce16", 00:39:09.747 "is_configured": true, 00:39:09.747 "data_offset": 2048, 00:39:09.747 "data_size": 63488 00:39:09.747 }, 00:39:09.747 { 00:39:09.747 "name": "BaseBdev4", 00:39:09.747 "uuid": "aa2b4c44-c146-5142-b6a2-ff4733fe9317", 00:39:09.747 "is_configured": true, 00:39:09.747 "data_offset": 2048, 00:39:09.747 "data_size": 63488 00:39:09.747 } 00:39:09.747 ] 00:39:09.747 }' 00:39:09.747 23:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:09.747 23:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:09.747 23:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:10.007 23:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:10.007 23:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:39:10.007 23:20:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:10.007 23:20:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:10.007 [2024-12-09 23:20:50.420348] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:10.007 23:20:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:10.007 23:20:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:39:10.007 [2024-12-09 23:20:50.489128] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:39:10.007 [2024-12-09 23:20:50.491546] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:10.007 [2024-12-09 23:20:50.614031] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:39:10.267 [2024-12-09 23:20:50.762018] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:39:10.267 [2024-12-09 23:20:50.762555] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:39:10.526 [2024-12-09 23:20:51.013310] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:39:10.526 [2024-12-09 23:20:51.014966] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:39:10.785 151.67 IOPS, 455.00 MiB/s [2024-12-09T23:20:51.421Z] [2024-12-09 23:20:51.225140] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:39:10.785 [2024-12-09 23:20:51.226126] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:39:11.044 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:11.044 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:11.044 23:20:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:11.044 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:11.044 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:11.045 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:11.045 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:11.045 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.045 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:11.045 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.045 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:11.045 "name": "raid_bdev1", 00:39:11.045 "uuid": "2e0d3968-3c7b-4f91-8392-cbe7268244c6", 00:39:11.045 "strip_size_kb": 0, 00:39:11.045 "state": "online", 00:39:11.045 "raid_level": "raid1", 00:39:11.045 "superblock": true, 00:39:11.045 "num_base_bdevs": 4, 00:39:11.045 "num_base_bdevs_discovered": 4, 00:39:11.045 "num_base_bdevs_operational": 4, 00:39:11.045 "process": { 00:39:11.045 "type": "rebuild", 00:39:11.045 "target": "spare", 00:39:11.045 "progress": { 00:39:11.045 "blocks": 12288, 00:39:11.045 "percent": 19 00:39:11.045 } 00:39:11.045 }, 00:39:11.045 "base_bdevs_list": [ 00:39:11.045 { 00:39:11.045 "name": "spare", 00:39:11.045 "uuid": "24483110-9b4c-54c7-b14a-e660ca580c27", 00:39:11.045 "is_configured": true, 00:39:11.045 "data_offset": 2048, 00:39:11.045 "data_size": 63488 00:39:11.045 }, 00:39:11.045 { 00:39:11.045 "name": "BaseBdev2", 00:39:11.045 "uuid": "1d946f3d-49bb-5db8-8721-4dfefe74cabf", 00:39:11.045 "is_configured": true, 00:39:11.045 "data_offset": 2048, 00:39:11.045 "data_size": 
63488 00:39:11.045 }, 00:39:11.045 { 00:39:11.045 "name": "BaseBdev3", 00:39:11.045 "uuid": "37a2e591-47e0-5516-899e-a5ca4bafce16", 00:39:11.045 "is_configured": true, 00:39:11.045 "data_offset": 2048, 00:39:11.045 "data_size": 63488 00:39:11.045 }, 00:39:11.045 { 00:39:11.045 "name": "BaseBdev4", 00:39:11.045 "uuid": "aa2b4c44-c146-5142-b6a2-ff4733fe9317", 00:39:11.045 "is_configured": true, 00:39:11.045 "data_offset": 2048, 00:39:11.045 "data_size": 63488 00:39:11.045 } 00:39:11.045 ] 00:39:11.045 }' 00:39:11.045 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:11.045 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:11.045 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:11.045 [2024-12-09 23:20:51.562589] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:39:11.045 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:11.045 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:39:11.045 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:39:11.045 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:39:11.045 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:39:11.045 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:39:11.045 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:39:11.045 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:39:11.045 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.045 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:11.045 [2024-12-09 23:20:51.594087] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:39:11.305 [2024-12-09 23:20:51.845722] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:39:11.305 [2024-12-09 23:20:51.845786] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:39:11.305 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.305 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:39:11.305 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:39:11.305 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:11.305 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:11.305 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:11.305 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:11.305 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:11.305 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:11.305 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:11.305 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.305 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:11.305 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:39:11.305 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:11.305 "name": "raid_bdev1", 00:39:11.305 "uuid": "2e0d3968-3c7b-4f91-8392-cbe7268244c6", 00:39:11.305 "strip_size_kb": 0, 00:39:11.305 "state": "online", 00:39:11.305 "raid_level": "raid1", 00:39:11.305 "superblock": true, 00:39:11.305 "num_base_bdevs": 4, 00:39:11.305 "num_base_bdevs_discovered": 3, 00:39:11.305 "num_base_bdevs_operational": 3, 00:39:11.305 "process": { 00:39:11.305 "type": "rebuild", 00:39:11.305 "target": "spare", 00:39:11.305 "progress": { 00:39:11.305 "blocks": 16384, 00:39:11.305 "percent": 25 00:39:11.305 } 00:39:11.305 }, 00:39:11.305 "base_bdevs_list": [ 00:39:11.305 { 00:39:11.305 "name": "spare", 00:39:11.305 "uuid": "24483110-9b4c-54c7-b14a-e660ca580c27", 00:39:11.305 "is_configured": true, 00:39:11.305 "data_offset": 2048, 00:39:11.305 "data_size": 63488 00:39:11.305 }, 00:39:11.305 { 00:39:11.305 "name": null, 00:39:11.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:11.305 "is_configured": false, 00:39:11.305 "data_offset": 0, 00:39:11.305 "data_size": 63488 00:39:11.305 }, 00:39:11.305 { 00:39:11.305 "name": "BaseBdev3", 00:39:11.305 "uuid": "37a2e591-47e0-5516-899e-a5ca4bafce16", 00:39:11.305 "is_configured": true, 00:39:11.305 "data_offset": 2048, 00:39:11.305 "data_size": 63488 00:39:11.305 }, 00:39:11.305 { 00:39:11.305 "name": "BaseBdev4", 00:39:11.305 "uuid": "aa2b4c44-c146-5142-b6a2-ff4733fe9317", 00:39:11.305 "is_configured": true, 00:39:11.305 "data_offset": 2048, 00:39:11.305 "data_size": 63488 00:39:11.305 } 00:39:11.305 ] 00:39:11.305 }' 00:39:11.305 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:11.565 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:11.565 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:11.565 
23:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:11.565 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=501 00:39:11.565 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:39:11.565 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:11.565 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:11.565 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:11.565 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:11.565 23:20:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:11.565 23:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:11.565 23:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:11.565 23:20:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.565 23:20:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:11.565 23:20:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.565 23:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:11.565 "name": "raid_bdev1", 00:39:11.565 "uuid": "2e0d3968-3c7b-4f91-8392-cbe7268244c6", 00:39:11.565 "strip_size_kb": 0, 00:39:11.565 "state": "online", 00:39:11.565 "raid_level": "raid1", 00:39:11.565 "superblock": true, 00:39:11.565 "num_base_bdevs": 4, 00:39:11.565 "num_base_bdevs_discovered": 3, 00:39:11.565 "num_base_bdevs_operational": 3, 00:39:11.565 "process": { 00:39:11.565 "type": "rebuild", 00:39:11.565 "target": 
"spare", 00:39:11.565 "progress": { 00:39:11.565 "blocks": 18432, 00:39:11.565 "percent": 29 00:39:11.565 } 00:39:11.565 }, 00:39:11.565 "base_bdevs_list": [ 00:39:11.565 { 00:39:11.565 "name": "spare", 00:39:11.565 "uuid": "24483110-9b4c-54c7-b14a-e660ca580c27", 00:39:11.565 "is_configured": true, 00:39:11.565 "data_offset": 2048, 00:39:11.565 "data_size": 63488 00:39:11.565 }, 00:39:11.565 { 00:39:11.565 "name": null, 00:39:11.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:11.565 "is_configured": false, 00:39:11.565 "data_offset": 0, 00:39:11.565 "data_size": 63488 00:39:11.565 }, 00:39:11.565 { 00:39:11.565 "name": "BaseBdev3", 00:39:11.565 "uuid": "37a2e591-47e0-5516-899e-a5ca4bafce16", 00:39:11.565 "is_configured": true, 00:39:11.565 "data_offset": 2048, 00:39:11.565 "data_size": 63488 00:39:11.565 }, 00:39:11.565 { 00:39:11.565 "name": "BaseBdev4", 00:39:11.565 "uuid": "aa2b4c44-c146-5142-b6a2-ff4733fe9317", 00:39:11.565 "is_configured": true, 00:39:11.565 "data_offset": 2048, 00:39:11.565 "data_size": 63488 00:39:11.565 } 00:39:11.565 ] 00:39:11.565 }' 00:39:11.565 23:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:11.565 133.25 IOPS, 399.75 MiB/s [2024-12-09T23:20:52.201Z] 23:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:11.565 23:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:11.565 [2024-12-09 23:20:52.099159] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:39:11.565 23:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:11.565 23:20:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:39:12.134 [2024-12-09 23:20:52.519429] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 
offset_begin: 24576 offset_end: 30720 00:39:12.134 [2024-12-09 23:20:52.632935] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:39:12.703 117.40 IOPS, 352.20 MiB/s [2024-12-09T23:20:53.339Z] [2024-12-09 23:20:53.045254] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:39:12.703 23:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:39:12.703 23:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:12.703 23:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:12.703 23:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:12.703 23:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:12.703 23:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:12.703 23:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:12.703 23:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.703 23:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:12.703 23:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:12.703 23:20:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.703 23:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:12.703 "name": "raid_bdev1", 00:39:12.703 "uuid": "2e0d3968-3c7b-4f91-8392-cbe7268244c6", 00:39:12.703 "strip_size_kb": 0, 00:39:12.703 "state": "online", 00:39:12.703 "raid_level": "raid1", 00:39:12.703 "superblock": 
true, 00:39:12.703 "num_base_bdevs": 4, 00:39:12.703 "num_base_bdevs_discovered": 3, 00:39:12.703 "num_base_bdevs_operational": 3, 00:39:12.703 "process": { 00:39:12.703 "type": "rebuild", 00:39:12.703 "target": "spare", 00:39:12.703 "progress": { 00:39:12.703 "blocks": 34816, 00:39:12.703 "percent": 54 00:39:12.703 } 00:39:12.703 }, 00:39:12.703 "base_bdevs_list": [ 00:39:12.703 { 00:39:12.703 "name": "spare", 00:39:12.703 "uuid": "24483110-9b4c-54c7-b14a-e660ca580c27", 00:39:12.703 "is_configured": true, 00:39:12.703 "data_offset": 2048, 00:39:12.703 "data_size": 63488 00:39:12.703 }, 00:39:12.703 { 00:39:12.703 "name": null, 00:39:12.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:12.703 "is_configured": false, 00:39:12.703 "data_offset": 0, 00:39:12.703 "data_size": 63488 00:39:12.703 }, 00:39:12.703 { 00:39:12.703 "name": "BaseBdev3", 00:39:12.703 "uuid": "37a2e591-47e0-5516-899e-a5ca4bafce16", 00:39:12.703 "is_configured": true, 00:39:12.703 "data_offset": 2048, 00:39:12.703 "data_size": 63488 00:39:12.703 }, 00:39:12.703 { 00:39:12.703 "name": "BaseBdev4", 00:39:12.703 "uuid": "aa2b4c44-c146-5142-b6a2-ff4733fe9317", 00:39:12.703 "is_configured": true, 00:39:12.703 "data_offset": 2048, 00:39:12.703 "data_size": 63488 00:39:12.703 } 00:39:12.703 ] 00:39:12.703 }' 00:39:12.704 23:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:12.704 23:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:12.704 23:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:12.704 23:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:12.704 23:20:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:39:13.646 [2024-12-09 23:20:54.035708] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 
offset_begin: 49152 offset_end: 55296 00:39:13.646 105.67 IOPS, 317.00 MiB/s [2024-12-09T23:20:54.282Z] [2024-12-09 23:20:54.263275] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:39:13.646 23:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:39:13.646 23:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:13.646 23:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:13.646 23:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:13.646 23:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:13.646 23:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:13.646 23:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:13.646 23:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.646 23:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:13.646 23:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:13.905 23:20:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.905 23:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:13.905 "name": "raid_bdev1", 00:39:13.905 "uuid": "2e0d3968-3c7b-4f91-8392-cbe7268244c6", 00:39:13.905 "strip_size_kb": 0, 00:39:13.905 "state": "online", 00:39:13.905 "raid_level": "raid1", 00:39:13.905 "superblock": true, 00:39:13.905 "num_base_bdevs": 4, 00:39:13.905 "num_base_bdevs_discovered": 3, 00:39:13.905 "num_base_bdevs_operational": 3, 00:39:13.905 "process": { 
00:39:13.905 "type": "rebuild", 00:39:13.905 "target": "spare", 00:39:13.905 "progress": { 00:39:13.905 "blocks": 57344, 00:39:13.905 "percent": 90 00:39:13.905 } 00:39:13.905 }, 00:39:13.905 "base_bdevs_list": [ 00:39:13.905 { 00:39:13.905 "name": "spare", 00:39:13.905 "uuid": "24483110-9b4c-54c7-b14a-e660ca580c27", 00:39:13.905 "is_configured": true, 00:39:13.905 "data_offset": 2048, 00:39:13.905 "data_size": 63488 00:39:13.905 }, 00:39:13.905 { 00:39:13.905 "name": null, 00:39:13.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:13.905 "is_configured": false, 00:39:13.905 "data_offset": 0, 00:39:13.905 "data_size": 63488 00:39:13.905 }, 00:39:13.905 { 00:39:13.905 "name": "BaseBdev3", 00:39:13.905 "uuid": "37a2e591-47e0-5516-899e-a5ca4bafce16", 00:39:13.905 "is_configured": true, 00:39:13.905 "data_offset": 2048, 00:39:13.905 "data_size": 63488 00:39:13.905 }, 00:39:13.905 { 00:39:13.905 "name": "BaseBdev4", 00:39:13.905 "uuid": "aa2b4c44-c146-5142-b6a2-ff4733fe9317", 00:39:13.905 "is_configured": true, 00:39:13.905 "data_offset": 2048, 00:39:13.905 "data_size": 63488 00:39:13.905 } 00:39:13.905 ] 00:39:13.905 }' 00:39:13.905 23:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:13.905 23:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:13.905 23:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:13.905 23:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:13.905 23:20:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:39:13.905 [2024-12-09 23:20:54.484814] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:39:13.905 [2024-12-09 23:20:54.485151] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 
offset_begin: 55296 offset_end: 61440 00:39:14.474 [2024-12-09 23:20:54.799661] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:39:14.474 [2024-12-09 23:20:54.899526] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:39:14.474 [2024-12-09 23:20:54.901687] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:15.041 97.29 IOPS, 291.86 MiB/s [2024-12-09T23:20:55.677Z] 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:39:15.041 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:15.041 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:15.041 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:15.041 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:15.041 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:15.041 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:15.041 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:15.041 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:15.041 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:15.042 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:15.042 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:15.042 "name": "raid_bdev1", 00:39:15.042 "uuid": "2e0d3968-3c7b-4f91-8392-cbe7268244c6", 00:39:15.042 "strip_size_kb": 0, 00:39:15.042 "state": "online", 
00:39:15.042 "raid_level": "raid1", 00:39:15.042 "superblock": true, 00:39:15.042 "num_base_bdevs": 4, 00:39:15.042 "num_base_bdevs_discovered": 3, 00:39:15.042 "num_base_bdevs_operational": 3, 00:39:15.042 "base_bdevs_list": [ 00:39:15.042 { 00:39:15.042 "name": "spare", 00:39:15.042 "uuid": "24483110-9b4c-54c7-b14a-e660ca580c27", 00:39:15.042 "is_configured": true, 00:39:15.042 "data_offset": 2048, 00:39:15.042 "data_size": 63488 00:39:15.042 }, 00:39:15.042 { 00:39:15.042 "name": null, 00:39:15.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:15.042 "is_configured": false, 00:39:15.042 "data_offset": 0, 00:39:15.042 "data_size": 63488 00:39:15.042 }, 00:39:15.042 { 00:39:15.042 "name": "BaseBdev3", 00:39:15.042 "uuid": "37a2e591-47e0-5516-899e-a5ca4bafce16", 00:39:15.042 "is_configured": true, 00:39:15.042 "data_offset": 2048, 00:39:15.042 "data_size": 63488 00:39:15.042 }, 00:39:15.042 { 00:39:15.042 "name": "BaseBdev4", 00:39:15.042 "uuid": "aa2b4c44-c146-5142-b6a2-ff4733fe9317", 00:39:15.042 "is_configured": true, 00:39:15.042 "data_offset": 2048, 00:39:15.042 "data_size": 63488 00:39:15.042 } 00:39:15.042 ] 00:39:15.042 }' 00:39:15.042 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:15.042 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:39:15.042 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:15.042 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:39:15.042 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:39:15.042 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:15.042 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:15.042 23:20:55 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:39:15.042 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:15.042 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:15.042 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:15.042 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:15.042 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:15.042 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:15.042 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:15.042 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:15.042 "name": "raid_bdev1", 00:39:15.042 "uuid": "2e0d3968-3c7b-4f91-8392-cbe7268244c6", 00:39:15.042 "strip_size_kb": 0, 00:39:15.042 "state": "online", 00:39:15.042 "raid_level": "raid1", 00:39:15.042 "superblock": true, 00:39:15.042 "num_base_bdevs": 4, 00:39:15.042 "num_base_bdevs_discovered": 3, 00:39:15.042 "num_base_bdevs_operational": 3, 00:39:15.042 "base_bdevs_list": [ 00:39:15.042 { 00:39:15.042 "name": "spare", 00:39:15.042 "uuid": "24483110-9b4c-54c7-b14a-e660ca580c27", 00:39:15.042 "is_configured": true, 00:39:15.042 "data_offset": 2048, 00:39:15.042 "data_size": 63488 00:39:15.042 }, 00:39:15.042 { 00:39:15.042 "name": null, 00:39:15.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:15.042 "is_configured": false, 00:39:15.042 "data_offset": 0, 00:39:15.042 "data_size": 63488 00:39:15.042 }, 00:39:15.042 { 00:39:15.042 "name": "BaseBdev3", 00:39:15.042 "uuid": "37a2e591-47e0-5516-899e-a5ca4bafce16", 00:39:15.042 "is_configured": true, 00:39:15.042 "data_offset": 2048, 00:39:15.042 
"data_size": 63488 00:39:15.042 }, 00:39:15.042 { 00:39:15.042 "name": "BaseBdev4", 00:39:15.042 "uuid": "aa2b4c44-c146-5142-b6a2-ff4733fe9317", 00:39:15.042 "is_configured": true, 00:39:15.042 "data_offset": 2048, 00:39:15.042 "data_size": 63488 00:39:15.042 } 00:39:15.042 ] 00:39:15.042 }' 00:39:15.042 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:15.042 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:15.042 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:15.301 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:15.301 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:39:15.301 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:15.301 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:15.301 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:15.301 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:15.301 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:15.301 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:15.301 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:15.301 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:15.301 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:15.301 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:39:15.301 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:15.301 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:15.301 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:15.301 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:15.301 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:15.301 "name": "raid_bdev1", 00:39:15.301 "uuid": "2e0d3968-3c7b-4f91-8392-cbe7268244c6", 00:39:15.301 "strip_size_kb": 0, 00:39:15.301 "state": "online", 00:39:15.301 "raid_level": "raid1", 00:39:15.301 "superblock": true, 00:39:15.301 "num_base_bdevs": 4, 00:39:15.301 "num_base_bdevs_discovered": 3, 00:39:15.301 "num_base_bdevs_operational": 3, 00:39:15.301 "base_bdevs_list": [ 00:39:15.301 { 00:39:15.301 "name": "spare", 00:39:15.301 "uuid": "24483110-9b4c-54c7-b14a-e660ca580c27", 00:39:15.301 "is_configured": true, 00:39:15.301 "data_offset": 2048, 00:39:15.301 "data_size": 63488 00:39:15.301 }, 00:39:15.301 { 00:39:15.301 "name": null, 00:39:15.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:15.301 "is_configured": false, 00:39:15.301 "data_offset": 0, 00:39:15.301 "data_size": 63488 00:39:15.301 }, 00:39:15.301 { 00:39:15.301 "name": "BaseBdev3", 00:39:15.301 "uuid": "37a2e591-47e0-5516-899e-a5ca4bafce16", 00:39:15.301 "is_configured": true, 00:39:15.301 "data_offset": 2048, 00:39:15.301 "data_size": 63488 00:39:15.301 }, 00:39:15.301 { 00:39:15.301 "name": "BaseBdev4", 00:39:15.301 "uuid": "aa2b4c44-c146-5142-b6a2-ff4733fe9317", 00:39:15.301 "is_configured": true, 00:39:15.301 "data_offset": 2048, 00:39:15.301 "data_size": 63488 00:39:15.301 } 00:39:15.301 ] 00:39:15.301 }' 00:39:15.301 23:20:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:15.301 23:20:55 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:15.570 89.38 IOPS, 268.12 MiB/s [2024-12-09T23:20:56.206Z] 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:39:15.570 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:15.570 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:15.570 [2024-12-09 23:20:56.155468] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:15.570 [2024-12-09 23:20:56.155646] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:15.831 00:39:15.831 Latency(us) 00:39:15.831 [2024-12-09T23:20:56.467Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:15.831 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:39:15.831 raid_bdev1 : 8.19 87.90 263.69 0.00 0.00 15941.61 312.55 110332.09 00:39:15.831 [2024-12-09T23:20:56.467Z] =================================================================================================================== 00:39:15.831 [2024-12-09T23:20:56.467Z] Total : 87.90 263.69 0.00 0.00 15941.61 312.55 110332.09 00:39:15.831 [2024-12-09 23:20:56.251336] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:15.831 { 00:39:15.831 "results": [ 00:39:15.831 { 00:39:15.831 "job": "raid_bdev1", 00:39:15.831 "core_mask": "0x1", 00:39:15.831 "workload": "randrw", 00:39:15.831 "percentage": 50, 00:39:15.831 "status": "finished", 00:39:15.831 "queue_depth": 2, 00:39:15.831 "io_size": 3145728, 00:39:15.831 "runtime": 8.191466, 00:39:15.831 "iops": 87.89635457194109, 00:39:15.831 "mibps": 263.6890637158233, 00:39:15.831 "io_failed": 0, 00:39:15.831 "io_timeout": 0, 00:39:15.831 "avg_latency_us": 15941.61120928157, 00:39:15.831 "min_latency_us": 312.54618473895584, 00:39:15.831 
"max_latency_us": 110332.09317269076 00:39:15.831 } 00:39:15.831 ], 00:39:15.831 "core_count": 1 00:39:15.831 } 00:39:15.831 [2024-12-09 23:20:56.251584] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:15.831 [2024-12-09 23:20:56.251709] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:15.831 [2024-12-09 23:20:56.251724] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:39:15.831 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:15.831 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:15.831 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:39:15.831 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:15.831 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:15.831 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:15.831 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:39:15.831 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:39:15.831 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:39:15.831 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:39:15.831 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:39:15.831 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:39:15.831 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:15.831 23:20:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:39:15.831 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:15.831 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:39:15.831 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:15.831 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:15.831 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:39:16.090 /dev/nbd0 00:39:16.090 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:39:16.090 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:39:16.090 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:39:16.090 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:39:16.090 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:16.090 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:16.090 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:39:16.090 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:39:16.090 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:16.090 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:16.090 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:16.090 1+0 records in 00:39:16.090 1+0 
records out 00:39:16.090 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405102 s, 10.1 MB/s 00:39:16.090 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:16.090 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:39:16.090 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:16.090 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:16.090 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:39:16.090 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:16.090 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:16.090 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:39:16.090 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:39:16.090 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:39:16.091 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:39:16.091 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:39:16.091 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:39:16.091 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:39:16.091 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:39:16.091 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:16.091 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- 
# nbd_list=('/dev/nbd1') 00:39:16.091 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:16.091 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:39:16.091 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:16.091 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:16.091 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:39:16.351 /dev/nbd1 00:39:16.351 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:39:16.351 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:39:16.351 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:39:16.351 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:39:16.351 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:16.351 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:16.351 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:39:16.351 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:39:16.351 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:16.351 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:16.351 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:16.351 1+0 records in 00:39:16.351 1+0 records out 00:39:16.351 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000292582 s, 14.0 MB/s 00:39:16.351 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:16.351 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:39:16.351 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:16.351 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:16.351 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:39:16.351 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:16.351 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:16.351 23:20:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:39:16.643 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:39:16.643 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:39:16.643 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:39:16.643 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:16.643 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:39:16.643 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:16.643 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:39:16.643 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:39:16.643 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:39:16.644 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:39:16.644 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:16.644 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:16.644 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:39:16.644 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:39:16.644 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:39:16.644 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:39:16.644 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:39:16.644 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:39:16.644 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:39:16.644 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:39:16.644 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:16.644 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:39:16.644 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:16.644 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:39:16.644 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:16.644 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:16.644 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk 
BaseBdev4 /dev/nbd1 00:39:16.904 /dev/nbd1 00:39:16.904 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:39:16.904 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:39:16.904 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:39:16.904 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:39:16.904 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:16.904 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:16.904 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:39:16.904 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:39:16.904 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:16.904 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:16.904 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:16.904 1+0 records in 00:39:16.904 1+0 records out 00:39:16.904 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295075 s, 13.9 MB/s 00:39:16.904 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:16.904 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:39:16.904 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:16.904 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:16.904 23:20:57 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:39:16.904 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:16.904 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:16.904 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:39:17.164 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:39:17.164 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:39:17.164 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:39:17.164 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:17.164 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:39:17.164 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:17.164 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:39:17.423 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:39:17.423 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:39:17.423 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:39:17.423 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:17.423 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:17.423 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:39:17.423 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:39:17.423 23:20:57 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:39:17.423 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:39:17.423 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:39:17.423 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:39:17.423 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:17.423 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:39:17.423 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:17.423 23:20:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:39:17.683 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:17.683 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:17.683 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:17.683 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:17.683 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:17.683 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:17.683 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:39:17.683 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:39:17.683 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:39:17.683 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:39:17.683 23:20:58 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:17.683 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:17.683 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:17.683 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:39:17.683 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:17.683 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:17.684 [2024-12-09 23:20:58.101374] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:39:17.684 [2024-12-09 23:20:58.101449] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:17.684 [2024-12-09 23:20:58.101476] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:39:17.684 [2024-12-09 23:20:58.101488] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:17.684 [2024-12-09 23:20:58.104066] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:17.684 [2024-12-09 23:20:58.104108] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:39:17.684 [2024-12-09 23:20:58.104211] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:39:17.684 [2024-12-09 23:20:58.104264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:17.684 [2024-12-09 23:20:58.104416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:39:17.684 [2024-12-09 23:20:58.104512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:39:17.684 spare 00:39:17.684 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:39:17.684 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:39:17.684 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:17.684 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:17.684 [2024-12-09 23:20:58.204460] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:39:17.684 [2024-12-09 23:20:58.204516] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:39:17.684 [2024-12-09 23:20:58.204914] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:39:17.684 [2024-12-09 23:20:58.205127] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:39:17.684 [2024-12-09 23:20:58.205141] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:39:17.684 [2024-12-09 23:20:58.205363] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:17.684 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:17.684 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:39:17.684 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:17.684 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:17.684 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:17.684 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:17.684 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:17.684 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:39:17.684 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:17.684 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:17.684 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:17.684 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:17.684 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:17.684 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:17.684 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:17.684 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:17.684 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:17.684 "name": "raid_bdev1", 00:39:17.684 "uuid": "2e0d3968-3c7b-4f91-8392-cbe7268244c6", 00:39:17.684 "strip_size_kb": 0, 00:39:17.684 "state": "online", 00:39:17.684 "raid_level": "raid1", 00:39:17.684 "superblock": true, 00:39:17.684 "num_base_bdevs": 4, 00:39:17.684 "num_base_bdevs_discovered": 3, 00:39:17.684 "num_base_bdevs_operational": 3, 00:39:17.684 "base_bdevs_list": [ 00:39:17.684 { 00:39:17.684 "name": "spare", 00:39:17.684 "uuid": "24483110-9b4c-54c7-b14a-e660ca580c27", 00:39:17.684 "is_configured": true, 00:39:17.684 "data_offset": 2048, 00:39:17.684 "data_size": 63488 00:39:17.684 }, 00:39:17.684 { 00:39:17.684 "name": null, 00:39:17.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:17.684 "is_configured": false, 00:39:17.684 "data_offset": 2048, 00:39:17.684 "data_size": 63488 00:39:17.684 }, 00:39:17.684 { 00:39:17.684 "name": "BaseBdev3", 00:39:17.684 "uuid": "37a2e591-47e0-5516-899e-a5ca4bafce16", 00:39:17.684 "is_configured": true, 
00:39:17.684 "data_offset": 2048, 00:39:17.684 "data_size": 63488 00:39:17.684 }, 00:39:17.684 { 00:39:17.684 "name": "BaseBdev4", 00:39:17.684 "uuid": "aa2b4c44-c146-5142-b6a2-ff4733fe9317", 00:39:17.684 "is_configured": true, 00:39:17.684 "data_offset": 2048, 00:39:17.684 "data_size": 63488 00:39:17.684 } 00:39:17.684 ] 00:39:17.684 }' 00:39:17.684 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:17.684 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:18.255 "name": "raid_bdev1", 00:39:18.255 "uuid": "2e0d3968-3c7b-4f91-8392-cbe7268244c6", 00:39:18.255 "strip_size_kb": 0, 00:39:18.255 "state": "online", 00:39:18.255 "raid_level": "raid1", 00:39:18.255 
"superblock": true, 00:39:18.255 "num_base_bdevs": 4, 00:39:18.255 "num_base_bdevs_discovered": 3, 00:39:18.255 "num_base_bdevs_operational": 3, 00:39:18.255 "base_bdevs_list": [ 00:39:18.255 { 00:39:18.255 "name": "spare", 00:39:18.255 "uuid": "24483110-9b4c-54c7-b14a-e660ca580c27", 00:39:18.255 "is_configured": true, 00:39:18.255 "data_offset": 2048, 00:39:18.255 "data_size": 63488 00:39:18.255 }, 00:39:18.255 { 00:39:18.255 "name": null, 00:39:18.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:18.255 "is_configured": false, 00:39:18.255 "data_offset": 2048, 00:39:18.255 "data_size": 63488 00:39:18.255 }, 00:39:18.255 { 00:39:18.255 "name": "BaseBdev3", 00:39:18.255 "uuid": "37a2e591-47e0-5516-899e-a5ca4bafce16", 00:39:18.255 "is_configured": true, 00:39:18.255 "data_offset": 2048, 00:39:18.255 "data_size": 63488 00:39:18.255 }, 00:39:18.255 { 00:39:18.255 "name": "BaseBdev4", 00:39:18.255 "uuid": "aa2b4c44-c146-5142-b6a2-ff4733fe9317", 00:39:18.255 "is_configured": true, 00:39:18.255 "data_offset": 2048, 00:39:18.255 "data_size": 63488 00:39:18.255 } 00:39:18.255 ] 00:39:18.255 }' 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:18.255 [2024-12-09 23:20:58.848592] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:18.255 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:18.515 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:18.515 "name": "raid_bdev1", 00:39:18.515 "uuid": "2e0d3968-3c7b-4f91-8392-cbe7268244c6", 00:39:18.515 "strip_size_kb": 0, 00:39:18.515 "state": "online", 00:39:18.515 "raid_level": "raid1", 00:39:18.515 "superblock": true, 00:39:18.515 "num_base_bdevs": 4, 00:39:18.515 "num_base_bdevs_discovered": 2, 00:39:18.515 "num_base_bdevs_operational": 2, 00:39:18.515 "base_bdevs_list": [ 00:39:18.515 { 00:39:18.515 "name": null, 00:39:18.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:18.515 "is_configured": false, 00:39:18.515 "data_offset": 0, 00:39:18.515 "data_size": 63488 00:39:18.515 }, 00:39:18.515 { 00:39:18.515 "name": null, 00:39:18.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:18.515 "is_configured": false, 00:39:18.515 "data_offset": 2048, 00:39:18.515 "data_size": 63488 00:39:18.515 }, 00:39:18.515 { 00:39:18.515 "name": "BaseBdev3", 00:39:18.515 "uuid": "37a2e591-47e0-5516-899e-a5ca4bafce16", 00:39:18.515 "is_configured": true, 00:39:18.515 "data_offset": 2048, 00:39:18.515 "data_size": 63488 00:39:18.515 }, 00:39:18.515 { 00:39:18.515 "name": "BaseBdev4", 00:39:18.515 "uuid": "aa2b4c44-c146-5142-b6a2-ff4733fe9317", 00:39:18.515 "is_configured": true, 00:39:18.515 "data_offset": 2048, 00:39:18.515 "data_size": 63488 00:39:18.515 } 00:39:18.515 ] 00:39:18.515 }' 00:39:18.515 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:18.515 23:20:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:18.775 23:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:39:18.775 23:20:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:18.775 23:20:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:18.775 [2024-12-09 23:20:59.264028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:18.775 [2024-12-09 23:20:59.264386] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:39:18.775 [2024-12-09 23:20:59.264423] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:39:18.775 [2024-12-09 23:20:59.264478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:18.775 [2024-12-09 23:20:59.279437] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:39:18.775 23:20:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:18.775 [2024-12-09 23:20:59.281635] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:18.775 23:20:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:39:19.728 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:19.728 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:19.728 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:19.728 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:19.728 
23:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:19.728 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:19.728 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:19.728 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:19.728 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:19.728 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:19.728 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:19.728 "name": "raid_bdev1", 00:39:19.728 "uuid": "2e0d3968-3c7b-4f91-8392-cbe7268244c6", 00:39:19.728 "strip_size_kb": 0, 00:39:19.728 "state": "online", 00:39:19.728 "raid_level": "raid1", 00:39:19.728 "superblock": true, 00:39:19.728 "num_base_bdevs": 4, 00:39:19.728 "num_base_bdevs_discovered": 3, 00:39:19.728 "num_base_bdevs_operational": 3, 00:39:19.728 "process": { 00:39:19.728 "type": "rebuild", 00:39:19.728 "target": "spare", 00:39:19.728 "progress": { 00:39:19.728 "blocks": 20480, 00:39:19.728 "percent": 32 00:39:19.728 } 00:39:19.728 }, 00:39:19.728 "base_bdevs_list": [ 00:39:19.728 { 00:39:19.728 "name": "spare", 00:39:19.728 "uuid": "24483110-9b4c-54c7-b14a-e660ca580c27", 00:39:19.728 "is_configured": true, 00:39:19.728 "data_offset": 2048, 00:39:19.728 "data_size": 63488 00:39:19.728 }, 00:39:19.728 { 00:39:19.728 "name": null, 00:39:19.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:19.729 "is_configured": false, 00:39:19.729 "data_offset": 2048, 00:39:19.729 "data_size": 63488 00:39:19.729 }, 00:39:19.729 { 00:39:19.729 "name": "BaseBdev3", 00:39:19.729 "uuid": "37a2e591-47e0-5516-899e-a5ca4bafce16", 00:39:19.729 "is_configured": true, 00:39:19.729 "data_offset": 2048, 00:39:19.729 
"data_size": 63488 00:39:19.729 }, 00:39:19.729 { 00:39:19.729 "name": "BaseBdev4", 00:39:19.729 "uuid": "aa2b4c44-c146-5142-b6a2-ff4733fe9317", 00:39:19.729 "is_configured": true, 00:39:19.729 "data_offset": 2048, 00:39:19.729 "data_size": 63488 00:39:19.729 } 00:39:19.729 ] 00:39:19.729 }' 00:39:19.729 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:19.988 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:19.988 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:19.988 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:19.988 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:39:19.988 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:19.988 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:19.988 [2024-12-09 23:21:00.421419] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:19.988 [2024-12-09 23:21:00.487492] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:39:19.988 [2024-12-09 23:21:00.487586] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:19.988 [2024-12-09 23:21:00.487611] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:19.988 [2024-12-09 23:21:00.487621] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:39:19.988 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:19.988 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:19.988 23:21:00 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:19.988 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:19.988 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:19.988 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:19.988 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:19.988 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:19.988 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:19.988 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:19.988 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:19.988 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:19.988 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:19.988 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:19.988 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:19.988 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:19.988 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:19.988 "name": "raid_bdev1", 00:39:19.988 "uuid": "2e0d3968-3c7b-4f91-8392-cbe7268244c6", 00:39:19.988 "strip_size_kb": 0, 00:39:19.988 "state": "online", 00:39:19.988 "raid_level": "raid1", 00:39:19.988 "superblock": true, 00:39:19.988 "num_base_bdevs": 4, 00:39:19.988 "num_base_bdevs_discovered": 2, 00:39:19.988 "num_base_bdevs_operational": 2, 
00:39:19.988 "base_bdevs_list": [ 00:39:19.988 { 00:39:19.988 "name": null, 00:39:19.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:19.988 "is_configured": false, 00:39:19.988 "data_offset": 0, 00:39:19.988 "data_size": 63488 00:39:19.988 }, 00:39:19.988 { 00:39:19.988 "name": null, 00:39:19.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:19.988 "is_configured": false, 00:39:19.988 "data_offset": 2048, 00:39:19.988 "data_size": 63488 00:39:19.988 }, 00:39:19.988 { 00:39:19.988 "name": "BaseBdev3", 00:39:19.988 "uuid": "37a2e591-47e0-5516-899e-a5ca4bafce16", 00:39:19.988 "is_configured": true, 00:39:19.988 "data_offset": 2048, 00:39:19.989 "data_size": 63488 00:39:19.989 }, 00:39:19.989 { 00:39:19.989 "name": "BaseBdev4", 00:39:19.989 "uuid": "aa2b4c44-c146-5142-b6a2-ff4733fe9317", 00:39:19.989 "is_configured": true, 00:39:19.989 "data_offset": 2048, 00:39:19.989 "data_size": 63488 00:39:19.989 } 00:39:19.989 ] 00:39:19.989 }' 00:39:19.989 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:19.989 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:20.558 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:39:20.558 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:20.558 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:20.558 [2024-12-09 23:21:00.977124] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:39:20.558 [2024-12-09 23:21:00.977201] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:20.558 [2024-12-09 23:21:00.977235] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:39:20.558 [2024-12-09 23:21:00.977248] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:39:20.558 [2024-12-09 23:21:00.977780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:20.558 [2024-12-09 23:21:00.977817] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:39:20.558 [2024-12-09 23:21:00.977924] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:39:20.558 [2024-12-09 23:21:00.977939] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:39:20.558 [2024-12-09 23:21:00.977957] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:39:20.558 [2024-12-09 23:21:00.977986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:20.558 [2024-12-09 23:21:00.993562] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:39:20.558 spare 00:39:20.558 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:20.558 23:21:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:39:20.558 [2024-12-09 23:21:00.995825] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:21.496 23:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:21.496 23:21:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:21.496 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:21.496 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:21.496 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:21.496 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:39:21.496 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:21.496 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:21.496 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:21.496 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:21.496 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:21.496 "name": "raid_bdev1", 00:39:21.496 "uuid": "2e0d3968-3c7b-4f91-8392-cbe7268244c6", 00:39:21.496 "strip_size_kb": 0, 00:39:21.496 "state": "online", 00:39:21.496 "raid_level": "raid1", 00:39:21.496 "superblock": true, 00:39:21.496 "num_base_bdevs": 4, 00:39:21.496 "num_base_bdevs_discovered": 3, 00:39:21.496 "num_base_bdevs_operational": 3, 00:39:21.496 "process": { 00:39:21.496 "type": "rebuild", 00:39:21.496 "target": "spare", 00:39:21.496 "progress": { 00:39:21.496 "blocks": 20480, 00:39:21.496 "percent": 32 00:39:21.496 } 00:39:21.496 }, 00:39:21.496 "base_bdevs_list": [ 00:39:21.496 { 00:39:21.496 "name": "spare", 00:39:21.496 "uuid": "24483110-9b4c-54c7-b14a-e660ca580c27", 00:39:21.496 "is_configured": true, 00:39:21.496 "data_offset": 2048, 00:39:21.496 "data_size": 63488 00:39:21.496 }, 00:39:21.496 { 00:39:21.496 "name": null, 00:39:21.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:21.496 "is_configured": false, 00:39:21.496 "data_offset": 2048, 00:39:21.496 "data_size": 63488 00:39:21.496 }, 00:39:21.496 { 00:39:21.496 "name": "BaseBdev3", 00:39:21.496 "uuid": "37a2e591-47e0-5516-899e-a5ca4bafce16", 00:39:21.496 "is_configured": true, 00:39:21.496 "data_offset": 2048, 00:39:21.496 "data_size": 63488 00:39:21.496 }, 00:39:21.496 { 00:39:21.496 "name": "BaseBdev4", 00:39:21.496 "uuid": "aa2b4c44-c146-5142-b6a2-ff4733fe9317", 00:39:21.496 "is_configured": true, 00:39:21.496 "data_offset": 2048, 
00:39:21.496 "data_size": 63488 00:39:21.496 } 00:39:21.496 ] 00:39:21.496 }' 00:39:21.496 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:21.496 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:21.496 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:21.755 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:21.755 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:39:21.755 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:21.755 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:21.755 [2024-12-09 23:21:02.143578] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:21.755 [2024-12-09 23:21:02.201631] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:39:21.755 [2024-12-09 23:21:02.201739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:21.755 [2024-12-09 23:21:02.201757] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:21.755 [2024-12-09 23:21:02.201770] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:39:21.755 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:21.755 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:21.755 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:21.755 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:21.755 
23:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:21.755 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:21.755 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:21.755 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:21.755 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:21.755 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:21.755 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:21.755 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:21.755 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:21.755 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:21.755 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:21.755 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:21.755 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:21.755 "name": "raid_bdev1", 00:39:21.755 "uuid": "2e0d3968-3c7b-4f91-8392-cbe7268244c6", 00:39:21.755 "strip_size_kb": 0, 00:39:21.755 "state": "online", 00:39:21.755 "raid_level": "raid1", 00:39:21.755 "superblock": true, 00:39:21.755 "num_base_bdevs": 4, 00:39:21.755 "num_base_bdevs_discovered": 2, 00:39:21.755 "num_base_bdevs_operational": 2, 00:39:21.755 "base_bdevs_list": [ 00:39:21.755 { 00:39:21.755 "name": null, 00:39:21.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:21.755 "is_configured": false, 00:39:21.755 "data_offset": 0, 00:39:21.755 
"data_size": 63488 00:39:21.755 }, 00:39:21.755 { 00:39:21.755 "name": null, 00:39:21.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:21.755 "is_configured": false, 00:39:21.755 "data_offset": 2048, 00:39:21.755 "data_size": 63488 00:39:21.755 }, 00:39:21.755 { 00:39:21.755 "name": "BaseBdev3", 00:39:21.755 "uuid": "37a2e591-47e0-5516-899e-a5ca4bafce16", 00:39:21.755 "is_configured": true, 00:39:21.755 "data_offset": 2048, 00:39:21.755 "data_size": 63488 00:39:21.755 }, 00:39:21.755 { 00:39:21.755 "name": "BaseBdev4", 00:39:21.755 "uuid": "aa2b4c44-c146-5142-b6a2-ff4733fe9317", 00:39:21.755 "is_configured": true, 00:39:21.755 "data_offset": 2048, 00:39:21.755 "data_size": 63488 00:39:21.755 } 00:39:21.755 ] 00:39:21.755 }' 00:39:21.755 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:21.755 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:22.016 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:22.016 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:22.016 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:39:22.016 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:22.016 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:22.016 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:22.016 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:22.016 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:22.016 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:22.276 23:21:02 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:22.276 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:22.276 "name": "raid_bdev1", 00:39:22.276 "uuid": "2e0d3968-3c7b-4f91-8392-cbe7268244c6", 00:39:22.276 "strip_size_kb": 0, 00:39:22.276 "state": "online", 00:39:22.276 "raid_level": "raid1", 00:39:22.276 "superblock": true, 00:39:22.276 "num_base_bdevs": 4, 00:39:22.276 "num_base_bdevs_discovered": 2, 00:39:22.276 "num_base_bdevs_operational": 2, 00:39:22.276 "base_bdevs_list": [ 00:39:22.276 { 00:39:22.276 "name": null, 00:39:22.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:22.276 "is_configured": false, 00:39:22.276 "data_offset": 0, 00:39:22.276 "data_size": 63488 00:39:22.276 }, 00:39:22.276 { 00:39:22.276 "name": null, 00:39:22.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:22.276 "is_configured": false, 00:39:22.276 "data_offset": 2048, 00:39:22.276 "data_size": 63488 00:39:22.276 }, 00:39:22.276 { 00:39:22.276 "name": "BaseBdev3", 00:39:22.276 "uuid": "37a2e591-47e0-5516-899e-a5ca4bafce16", 00:39:22.276 "is_configured": true, 00:39:22.276 "data_offset": 2048, 00:39:22.276 "data_size": 63488 00:39:22.276 }, 00:39:22.276 { 00:39:22.276 "name": "BaseBdev4", 00:39:22.276 "uuid": "aa2b4c44-c146-5142-b6a2-ff4733fe9317", 00:39:22.276 "is_configured": true, 00:39:22.276 "data_offset": 2048, 00:39:22.276 "data_size": 63488 00:39:22.276 } 00:39:22.276 ] 00:39:22.276 }' 00:39:22.276 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:22.276 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:22.276 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:22.276 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:22.276 23:21:02 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:39:22.276 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:22.276 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:22.276 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:22.276 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:39:22.276 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:22.276 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:22.276 [2024-12-09 23:21:02.791248] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:39:22.276 [2024-12-09 23:21:02.791319] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:22.276 [2024-12-09 23:21:02.791344] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:39:22.276 [2024-12-09 23:21:02.791358] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:22.276 [2024-12-09 23:21:02.791841] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:22.276 [2024-12-09 23:21:02.791871] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:39:22.276 [2024-12-09 23:21:02.791959] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:39:22.276 [2024-12-09 23:21:02.791978] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:39:22.276 [2024-12-09 23:21:02.791993] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:39:22.276 [2024-12-09 
23:21:02.792007] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:39:22.276 BaseBdev1 00:39:22.276 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:22.276 23:21:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:39:23.214 23:21:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:23.214 23:21:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:23.214 23:21:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:23.214 23:21:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:23.214 23:21:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:23.214 23:21:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:23.214 23:21:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:23.214 23:21:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:23.214 23:21:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:23.214 23:21:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:23.214 23:21:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:23.214 23:21:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:23.214 23:21:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:23.214 23:21:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:23.214 23:21:03 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:23.473 23:21:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:23.473 "name": "raid_bdev1", 00:39:23.473 "uuid": "2e0d3968-3c7b-4f91-8392-cbe7268244c6", 00:39:23.473 "strip_size_kb": 0, 00:39:23.473 "state": "online", 00:39:23.473 "raid_level": "raid1", 00:39:23.473 "superblock": true, 00:39:23.473 "num_base_bdevs": 4, 00:39:23.473 "num_base_bdevs_discovered": 2, 00:39:23.473 "num_base_bdevs_operational": 2, 00:39:23.473 "base_bdevs_list": [ 00:39:23.473 { 00:39:23.473 "name": null, 00:39:23.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:23.473 "is_configured": false, 00:39:23.473 "data_offset": 0, 00:39:23.473 "data_size": 63488 00:39:23.473 }, 00:39:23.473 { 00:39:23.473 "name": null, 00:39:23.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:23.473 "is_configured": false, 00:39:23.473 "data_offset": 2048, 00:39:23.473 "data_size": 63488 00:39:23.473 }, 00:39:23.473 { 00:39:23.473 "name": "BaseBdev3", 00:39:23.473 "uuid": "37a2e591-47e0-5516-899e-a5ca4bafce16", 00:39:23.473 "is_configured": true, 00:39:23.473 "data_offset": 2048, 00:39:23.473 "data_size": 63488 00:39:23.473 }, 00:39:23.473 { 00:39:23.473 "name": "BaseBdev4", 00:39:23.473 "uuid": "aa2b4c44-c146-5142-b6a2-ff4733fe9317", 00:39:23.473 "is_configured": true, 00:39:23.473 "data_offset": 2048, 00:39:23.473 "data_size": 63488 00:39:23.473 } 00:39:23.473 ] 00:39:23.473 }' 00:39:23.473 23:21:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:23.473 23:21:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:23.733 23:21:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:23.733 23:21:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:23.733 23:21:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 
-- # local process_type=none 00:39:23.733 23:21:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:23.733 23:21:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:23.733 23:21:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:23.733 23:21:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:23.733 23:21:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:23.733 23:21:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:23.733 23:21:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:23.733 23:21:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:23.733 "name": "raid_bdev1", 00:39:23.733 "uuid": "2e0d3968-3c7b-4f91-8392-cbe7268244c6", 00:39:23.733 "strip_size_kb": 0, 00:39:23.733 "state": "online", 00:39:23.733 "raid_level": "raid1", 00:39:23.733 "superblock": true, 00:39:23.733 "num_base_bdevs": 4, 00:39:23.733 "num_base_bdevs_discovered": 2, 00:39:23.733 "num_base_bdevs_operational": 2, 00:39:23.733 "base_bdevs_list": [ 00:39:23.733 { 00:39:23.733 "name": null, 00:39:23.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:23.733 "is_configured": false, 00:39:23.733 "data_offset": 0, 00:39:23.733 "data_size": 63488 00:39:23.733 }, 00:39:23.733 { 00:39:23.733 "name": null, 00:39:23.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:23.733 "is_configured": false, 00:39:23.733 "data_offset": 2048, 00:39:23.733 "data_size": 63488 00:39:23.733 }, 00:39:23.733 { 00:39:23.733 "name": "BaseBdev3", 00:39:23.733 "uuid": "37a2e591-47e0-5516-899e-a5ca4bafce16", 00:39:23.733 "is_configured": true, 00:39:23.733 "data_offset": 2048, 00:39:23.733 "data_size": 63488 00:39:23.733 }, 00:39:23.733 { 00:39:23.733 
"name": "BaseBdev4", 00:39:23.733 "uuid": "aa2b4c44-c146-5142-b6a2-ff4733fe9317", 00:39:23.733 "is_configured": true, 00:39:23.733 "data_offset": 2048, 00:39:23.733 "data_size": 63488 00:39:23.733 } 00:39:23.733 ] 00:39:23.733 }' 00:39:23.733 23:21:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:23.733 23:21:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:23.733 23:21:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:23.993 23:21:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:23.993 23:21:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:39:23.993 23:21:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:39:23.993 23:21:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:39:23.993 23:21:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:39:23.993 23:21:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:23.993 23:21:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:39:23.993 23:21:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:23.993 23:21:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:39:23.993 23:21:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:23.993 23:21:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:23.993 [2024-12-09 23:21:04.386554] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:23.993 [2024-12-09 23:21:04.386861] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:39:23.993 [2024-12-09 23:21:04.386885] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:39:23.993 request: 00:39:23.993 { 00:39:23.993 "base_bdev": "BaseBdev1", 00:39:23.993 "raid_bdev": "raid_bdev1", 00:39:23.993 "method": "bdev_raid_add_base_bdev", 00:39:23.993 "req_id": 1 00:39:23.993 } 00:39:23.993 Got JSON-RPC error response 00:39:23.993 response: 00:39:23.993 { 00:39:23.993 "code": -22, 00:39:23.993 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:39:23.993 } 00:39:23.993 23:21:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:39:23.993 23:21:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:39:23.993 23:21:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:23.993 23:21:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:23.993 23:21:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:23.993 23:21:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:39:25.005 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:25.005 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:25.005 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:25.005 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:25.005 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 
-- # local strip_size=0 00:39:25.005 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:25.005 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:25.005 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:25.005 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:25.005 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:25.005 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:25.005 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:25.005 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:25.005 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:25.005 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:25.005 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:25.005 "name": "raid_bdev1", 00:39:25.005 "uuid": "2e0d3968-3c7b-4f91-8392-cbe7268244c6", 00:39:25.005 "strip_size_kb": 0, 00:39:25.005 "state": "online", 00:39:25.005 "raid_level": "raid1", 00:39:25.005 "superblock": true, 00:39:25.005 "num_base_bdevs": 4, 00:39:25.005 "num_base_bdevs_discovered": 2, 00:39:25.005 "num_base_bdevs_operational": 2, 00:39:25.005 "base_bdevs_list": [ 00:39:25.005 { 00:39:25.005 "name": null, 00:39:25.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:25.005 "is_configured": false, 00:39:25.005 "data_offset": 0, 00:39:25.005 "data_size": 63488 00:39:25.005 }, 00:39:25.005 { 00:39:25.005 "name": null, 00:39:25.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:25.005 "is_configured": false, 
00:39:25.005 "data_offset": 2048, 00:39:25.005 "data_size": 63488 00:39:25.005 }, 00:39:25.005 { 00:39:25.005 "name": "BaseBdev3", 00:39:25.005 "uuid": "37a2e591-47e0-5516-899e-a5ca4bafce16", 00:39:25.005 "is_configured": true, 00:39:25.005 "data_offset": 2048, 00:39:25.005 "data_size": 63488 00:39:25.005 }, 00:39:25.005 { 00:39:25.005 "name": "BaseBdev4", 00:39:25.005 "uuid": "aa2b4c44-c146-5142-b6a2-ff4733fe9317", 00:39:25.005 "is_configured": true, 00:39:25.005 "data_offset": 2048, 00:39:25.005 "data_size": 63488 00:39:25.005 } 00:39:25.005 ] 00:39:25.005 }' 00:39:25.005 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:25.005 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:25.263 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:25.263 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:25.263 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:39:25.263 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:25.263 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:25.263 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:25.263 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:25.263 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:25.263 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:25.263 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:25.263 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:39:25.263 "name": "raid_bdev1", 00:39:25.264 "uuid": "2e0d3968-3c7b-4f91-8392-cbe7268244c6", 00:39:25.264 "strip_size_kb": 0, 00:39:25.264 "state": "online", 00:39:25.264 "raid_level": "raid1", 00:39:25.264 "superblock": true, 00:39:25.264 "num_base_bdevs": 4, 00:39:25.264 "num_base_bdevs_discovered": 2, 00:39:25.264 "num_base_bdevs_operational": 2, 00:39:25.264 "base_bdevs_list": [ 00:39:25.264 { 00:39:25.264 "name": null, 00:39:25.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:25.264 "is_configured": false, 00:39:25.264 "data_offset": 0, 00:39:25.264 "data_size": 63488 00:39:25.264 }, 00:39:25.264 { 00:39:25.264 "name": null, 00:39:25.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:25.264 "is_configured": false, 00:39:25.264 "data_offset": 2048, 00:39:25.264 "data_size": 63488 00:39:25.264 }, 00:39:25.264 { 00:39:25.264 "name": "BaseBdev3", 00:39:25.264 "uuid": "37a2e591-47e0-5516-899e-a5ca4bafce16", 00:39:25.264 "is_configured": true, 00:39:25.264 "data_offset": 2048, 00:39:25.264 "data_size": 63488 00:39:25.264 }, 00:39:25.264 { 00:39:25.264 "name": "BaseBdev4", 00:39:25.264 "uuid": "aa2b4c44-c146-5142-b6a2-ff4733fe9317", 00:39:25.264 "is_configured": true, 00:39:25.264 "data_offset": 2048, 00:39:25.264 "data_size": 63488 00:39:25.264 } 00:39:25.264 ] 00:39:25.264 }' 00:39:25.264 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:25.522 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:25.522 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:25.522 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:25.522 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79050 00:39:25.522 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 
79050 ']' 00:39:25.522 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79050 00:39:25.522 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:39:25.522 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:25.522 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79050 00:39:25.522 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:25.522 killing process with pid 79050 00:39:25.522 Received shutdown signal, test time was about 17.977855 seconds 00:39:25.522 00:39:25.522 Latency(us) 00:39:25.522 [2024-12-09T23:21:06.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:25.522 [2024-12-09T23:21:06.158Z] =================================================================================================================== 00:39:25.522 [2024-12-09T23:21:06.158Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:25.522 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:25.522 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79050' 00:39:25.522 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79050 00:39:25.522 [2024-12-09 23:21:05.997818] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:25.522 23:21:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79050 00:39:25.522 [2024-12-09 23:21:05.997944] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:25.522 [2024-12-09 23:21:05.998017] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:25.522 [2024-12-09 23:21:05.998029] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:39:26.092 [2024-12-09 23:21:06.426425] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:27.031 23:21:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:39:27.031 00:39:27.031 real 0m21.510s 00:39:27.031 user 0m27.940s 00:39:27.031 sys 0m2.999s 00:39:27.031 23:21:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:27.031 23:21:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:27.031 ************************************ 00:39:27.031 END TEST raid_rebuild_test_sb_io 00:39:27.031 ************************************ 00:39:27.293 23:21:07 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:39:27.293 23:21:07 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:39:27.293 23:21:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:39:27.293 23:21:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:27.293 23:21:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:39:27.293 ************************************ 00:39:27.293 START TEST raid5f_state_function_test 00:39:27.293 ************************************ 00:39:27.293 23:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:39:27.293 23:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:39:27.293 23:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:39:27.293 23:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:39:27.293 23:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:39:27.293 23:21:07 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:39:27.293 23:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:39:27.293 23:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:39:27.293 23:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:39:27.293 23:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:39:27.293 23:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:39:27.293 23:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:39:27.293 23:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:39:27.293 23:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:39:27.293 23:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:39:27.293 23:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:39:27.293 23:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:39:27.293 23:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:39:27.293 23:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:39:27.293 23:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:39:27.293 23:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:39:27.293 23:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:39:27.293 23:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:39:27.293 23:21:07 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@216 -- # strip_size=64 00:39:27.293 23:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:39:27.293 23:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:39:27.293 23:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:39:27.293 Process raid pid: 79774 00:39:27.293 23:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79774 00:39:27.293 23:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:39:27.293 23:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79774' 00:39:27.293 23:21:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79774 00:39:27.293 23:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 79774 ']' 00:39:27.293 23:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:27.293 23:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:27.293 23:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:27.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:27.293 23:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:27.293 23:21:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:27.293 [2024-12-09 23:21:07.815091] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:39:27.293 [2024-12-09 23:21:07.815226] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:27.552 [2024-12-09 23:21:08.007821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:27.552 [2024-12-09 23:21:08.124609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:27.811 [2024-12-09 23:21:08.341467] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:27.811 [2024-12-09 23:21:08.341517] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:28.382 23:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:28.382 23:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:39:28.382 23:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:39:28.382 23:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.382 23:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:28.382 [2024-12-09 23:21:08.725891] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:39:28.382 [2024-12-09 23:21:08.725960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:39:28.382 [2024-12-09 23:21:08.725972] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:39:28.382 [2024-12-09 23:21:08.725985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:39:28.382 [2024-12-09 23:21:08.725993] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:39:28.382 [2024-12-09 23:21:08.726005] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:39:28.382 23:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.382 23:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:39:28.382 23:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:28.382 23:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:28.382 23:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:39:28.382 23:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:28.382 23:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:28.382 23:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:28.382 23:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:28.382 23:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:28.382 23:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:28.382 23:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:28.382 23:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:28.382 23:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.382 23:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:28.382 23:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:39:28.382 23:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:28.382 "name": "Existed_Raid", 00:39:28.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:28.382 "strip_size_kb": 64, 00:39:28.382 "state": "configuring", 00:39:28.382 "raid_level": "raid5f", 00:39:28.382 "superblock": false, 00:39:28.382 "num_base_bdevs": 3, 00:39:28.382 "num_base_bdevs_discovered": 0, 00:39:28.382 "num_base_bdevs_operational": 3, 00:39:28.382 "base_bdevs_list": [ 00:39:28.382 { 00:39:28.382 "name": "BaseBdev1", 00:39:28.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:28.382 "is_configured": false, 00:39:28.382 "data_offset": 0, 00:39:28.382 "data_size": 0 00:39:28.382 }, 00:39:28.382 { 00:39:28.382 "name": "BaseBdev2", 00:39:28.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:28.382 "is_configured": false, 00:39:28.382 "data_offset": 0, 00:39:28.382 "data_size": 0 00:39:28.382 }, 00:39:28.382 { 00:39:28.382 "name": "BaseBdev3", 00:39:28.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:28.382 "is_configured": false, 00:39:28.382 "data_offset": 0, 00:39:28.382 "data_size": 0 00:39:28.382 } 00:39:28.382 ] 00:39:28.382 }' 00:39:28.382 23:21:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:28.382 23:21:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:28.642 23:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:39:28.642 23:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.642 23:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:28.642 [2024-12-09 23:21:09.173227] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:39:28.642 [2024-12-09 23:21:09.173272] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:39:28.642 23:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.642 23:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:39:28.642 23:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.642 23:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:28.642 [2024-12-09 23:21:09.185225] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:39:28.642 [2024-12-09 23:21:09.185301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:39:28.642 [2024-12-09 23:21:09.185312] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:39:28.642 [2024-12-09 23:21:09.185325] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:39:28.642 [2024-12-09 23:21:09.185334] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:39:28.642 [2024-12-09 23:21:09.185346] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:39:28.642 23:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.642 23:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:39:28.642 23:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.642 23:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:28.642 [2024-12-09 23:21:09.236692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:28.643 BaseBdev1 00:39:28.643 23:21:09 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.643 23:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:39:28.643 23:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:39:28.643 23:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:28.643 23:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:39:28.643 23:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:28.643 23:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:28.643 23:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:28.643 23:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.643 23:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:28.643 23:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.643 23:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:39:28.643 23:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.643 23:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:28.643 [ 00:39:28.643 { 00:39:28.643 "name": "BaseBdev1", 00:39:28.643 "aliases": [ 00:39:28.643 "239c0b09-e9ed-44ce-8ea3-1dea6c4b31a6" 00:39:28.643 ], 00:39:28.643 "product_name": "Malloc disk", 00:39:28.643 "block_size": 512, 00:39:28.643 "num_blocks": 65536, 00:39:28.643 "uuid": "239c0b09-e9ed-44ce-8ea3-1dea6c4b31a6", 00:39:28.643 "assigned_rate_limits": { 00:39:28.643 "rw_ios_per_sec": 0, 00:39:28.643 
"rw_mbytes_per_sec": 0, 00:39:28.643 "r_mbytes_per_sec": 0, 00:39:28.643 "w_mbytes_per_sec": 0 00:39:28.643 }, 00:39:28.643 "claimed": true, 00:39:28.643 "claim_type": "exclusive_write", 00:39:28.643 "zoned": false, 00:39:28.643 "supported_io_types": { 00:39:28.643 "read": true, 00:39:28.643 "write": true, 00:39:28.643 "unmap": true, 00:39:28.957 "flush": true, 00:39:28.957 "reset": true, 00:39:28.957 "nvme_admin": false, 00:39:28.957 "nvme_io": false, 00:39:28.957 "nvme_io_md": false, 00:39:28.957 "write_zeroes": true, 00:39:28.957 "zcopy": true, 00:39:28.957 "get_zone_info": false, 00:39:28.957 "zone_management": false, 00:39:28.957 "zone_append": false, 00:39:28.957 "compare": false, 00:39:28.957 "compare_and_write": false, 00:39:28.957 "abort": true, 00:39:28.957 "seek_hole": false, 00:39:28.957 "seek_data": false, 00:39:28.957 "copy": true, 00:39:28.957 "nvme_iov_md": false 00:39:28.957 }, 00:39:28.957 "memory_domains": [ 00:39:28.957 { 00:39:28.957 "dma_device_id": "system", 00:39:28.957 "dma_device_type": 1 00:39:28.957 }, 00:39:28.957 { 00:39:28.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:28.957 "dma_device_type": 2 00:39:28.957 } 00:39:28.957 ], 00:39:28.957 "driver_specific": {} 00:39:28.957 } 00:39:28.957 ] 00:39:28.957 23:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.957 23:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:39:28.957 23:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:39:28.957 23:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:28.957 23:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:28.957 23:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:39:28.957 23:21:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:28.957 23:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:28.957 23:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:28.957 23:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:28.957 23:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:28.957 23:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:28.957 23:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:28.957 23:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:28.957 23:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.957 23:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:28.957 23:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.957 23:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:28.957 "name": "Existed_Raid", 00:39:28.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:28.957 "strip_size_kb": 64, 00:39:28.957 "state": "configuring", 00:39:28.957 "raid_level": "raid5f", 00:39:28.957 "superblock": false, 00:39:28.957 "num_base_bdevs": 3, 00:39:28.957 "num_base_bdevs_discovered": 1, 00:39:28.957 "num_base_bdevs_operational": 3, 00:39:28.957 "base_bdevs_list": [ 00:39:28.957 { 00:39:28.957 "name": "BaseBdev1", 00:39:28.957 "uuid": "239c0b09-e9ed-44ce-8ea3-1dea6c4b31a6", 00:39:28.957 "is_configured": true, 00:39:28.957 "data_offset": 0, 00:39:28.957 "data_size": 65536 00:39:28.957 }, 00:39:28.957 { 00:39:28.957 "name": 
"BaseBdev2", 00:39:28.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:28.957 "is_configured": false, 00:39:28.957 "data_offset": 0, 00:39:28.957 "data_size": 0 00:39:28.957 }, 00:39:28.957 { 00:39:28.957 "name": "BaseBdev3", 00:39:28.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:28.957 "is_configured": false, 00:39:28.957 "data_offset": 0, 00:39:28.957 "data_size": 0 00:39:28.957 } 00:39:28.957 ] 00:39:28.957 }' 00:39:28.957 23:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:28.957 23:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:29.217 23:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:39:29.217 23:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:29.217 23:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:29.217 [2024-12-09 23:21:09.692106] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:39:29.217 [2024-12-09 23:21:09.692164] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:39:29.217 23:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.217 23:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:39:29.217 23:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:29.217 23:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:29.217 [2024-12-09 23:21:09.700143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:29.217 [2024-12-09 23:21:09.702346] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:39:29.217 [2024-12-09 23:21:09.702422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:39:29.217 [2024-12-09 23:21:09.702436] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:39:29.217 [2024-12-09 23:21:09.702449] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:39:29.217 23:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.217 23:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:39:29.217 23:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:39:29.217 23:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:39:29.217 23:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:29.217 23:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:29.217 23:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:39:29.217 23:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:29.217 23:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:29.217 23:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:29.217 23:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:29.217 23:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:29.217 23:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:29.217 23:21:09 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:29.217 23:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:29.217 23:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:29.217 23:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:29.217 23:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.217 23:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:29.217 "name": "Existed_Raid", 00:39:29.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:29.217 "strip_size_kb": 64, 00:39:29.217 "state": "configuring", 00:39:29.217 "raid_level": "raid5f", 00:39:29.217 "superblock": false, 00:39:29.217 "num_base_bdevs": 3, 00:39:29.217 "num_base_bdevs_discovered": 1, 00:39:29.217 "num_base_bdevs_operational": 3, 00:39:29.217 "base_bdevs_list": [ 00:39:29.217 { 00:39:29.217 "name": "BaseBdev1", 00:39:29.217 "uuid": "239c0b09-e9ed-44ce-8ea3-1dea6c4b31a6", 00:39:29.217 "is_configured": true, 00:39:29.217 "data_offset": 0, 00:39:29.217 "data_size": 65536 00:39:29.217 }, 00:39:29.217 { 00:39:29.217 "name": "BaseBdev2", 00:39:29.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:29.217 "is_configured": false, 00:39:29.217 "data_offset": 0, 00:39:29.217 "data_size": 0 00:39:29.217 }, 00:39:29.217 { 00:39:29.217 "name": "BaseBdev3", 00:39:29.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:29.217 "is_configured": false, 00:39:29.217 "data_offset": 0, 00:39:29.217 "data_size": 0 00:39:29.217 } 00:39:29.217 ] 00:39:29.217 }' 00:39:29.217 23:21:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:29.217 23:21:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:29.785 23:21:10 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:39:29.785 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:29.785 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:29.785 [2024-12-09 23:21:10.151445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:29.785 BaseBdev2 00:39:29.785 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.785 23:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:39:29.785 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:39:29.785 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:29.785 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:39:29.785 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:29.785 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:29.785 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:29.785 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:29.785 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:29.785 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.785 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:39:29.785 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:29.785 23:21:10 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:39:29.785 [ 00:39:29.785 { 00:39:29.785 "name": "BaseBdev2", 00:39:29.785 "aliases": [ 00:39:29.785 "004d4ca4-0ede-4407-a050-061b4da8ae68" 00:39:29.785 ], 00:39:29.785 "product_name": "Malloc disk", 00:39:29.785 "block_size": 512, 00:39:29.785 "num_blocks": 65536, 00:39:29.785 "uuid": "004d4ca4-0ede-4407-a050-061b4da8ae68", 00:39:29.785 "assigned_rate_limits": { 00:39:29.785 "rw_ios_per_sec": 0, 00:39:29.785 "rw_mbytes_per_sec": 0, 00:39:29.785 "r_mbytes_per_sec": 0, 00:39:29.785 "w_mbytes_per_sec": 0 00:39:29.785 }, 00:39:29.785 "claimed": true, 00:39:29.785 "claim_type": "exclusive_write", 00:39:29.785 "zoned": false, 00:39:29.785 "supported_io_types": { 00:39:29.785 "read": true, 00:39:29.785 "write": true, 00:39:29.785 "unmap": true, 00:39:29.785 "flush": true, 00:39:29.785 "reset": true, 00:39:29.785 "nvme_admin": false, 00:39:29.785 "nvme_io": false, 00:39:29.785 "nvme_io_md": false, 00:39:29.785 "write_zeroes": true, 00:39:29.785 "zcopy": true, 00:39:29.785 "get_zone_info": false, 00:39:29.785 "zone_management": false, 00:39:29.785 "zone_append": false, 00:39:29.785 "compare": false, 00:39:29.785 "compare_and_write": false, 00:39:29.785 "abort": true, 00:39:29.785 "seek_hole": false, 00:39:29.785 "seek_data": false, 00:39:29.785 "copy": true, 00:39:29.785 "nvme_iov_md": false 00:39:29.785 }, 00:39:29.785 "memory_domains": [ 00:39:29.785 { 00:39:29.785 "dma_device_id": "system", 00:39:29.785 "dma_device_type": 1 00:39:29.785 }, 00:39:29.785 { 00:39:29.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:29.785 "dma_device_type": 2 00:39:29.785 } 00:39:29.785 ], 00:39:29.785 "driver_specific": {} 00:39:29.785 } 00:39:29.785 ] 00:39:29.785 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.785 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:39:29.785 23:21:10 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:39:29.785 23:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:39:29.785 23:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:39:29.785 23:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:29.785 23:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:29.785 23:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:39:29.785 23:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:29.785 23:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:29.785 23:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:29.785 23:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:29.785 23:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:29.785 23:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:29.785 23:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:29.785 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:29.785 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:29.785 23:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:29.785 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.785 23:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:39:29.785 "name": "Existed_Raid", 00:39:29.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:29.785 "strip_size_kb": 64, 00:39:29.785 "state": "configuring", 00:39:29.785 "raid_level": "raid5f", 00:39:29.785 "superblock": false, 00:39:29.785 "num_base_bdevs": 3, 00:39:29.785 "num_base_bdevs_discovered": 2, 00:39:29.785 "num_base_bdevs_operational": 3, 00:39:29.785 "base_bdevs_list": [ 00:39:29.785 { 00:39:29.785 "name": "BaseBdev1", 00:39:29.785 "uuid": "239c0b09-e9ed-44ce-8ea3-1dea6c4b31a6", 00:39:29.785 "is_configured": true, 00:39:29.785 "data_offset": 0, 00:39:29.785 "data_size": 65536 00:39:29.785 }, 00:39:29.785 { 00:39:29.785 "name": "BaseBdev2", 00:39:29.785 "uuid": "004d4ca4-0ede-4407-a050-061b4da8ae68", 00:39:29.785 "is_configured": true, 00:39:29.785 "data_offset": 0, 00:39:29.785 "data_size": 65536 00:39:29.785 }, 00:39:29.785 { 00:39:29.785 "name": "BaseBdev3", 00:39:29.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:29.785 "is_configured": false, 00:39:29.785 "data_offset": 0, 00:39:29.785 "data_size": 0 00:39:29.785 } 00:39:29.785 ] 00:39:29.785 }' 00:39:29.785 23:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:29.785 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:30.043 23:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:39:30.043 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.043 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:30.301 [2024-12-09 23:21:10.689005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:39:30.301 [2024-12-09 23:21:10.689100] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:39:30.301 [2024-12-09 23:21:10.689122] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:39:30.301 [2024-12-09 23:21:10.689438] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:39:30.301 [2024-12-09 23:21:10.695208] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:39:30.301 [2024-12-09 23:21:10.695237] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:39:30.301 [2024-12-09 23:21:10.695531] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:30.301 BaseBdev3 00:39:30.301 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.301 23:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:39:30.301 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:39:30.301 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:30.301 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:39:30.301 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:30.301 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:30.301 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:30.301 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.301 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:30.301 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.301 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:39:30.301 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.301 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:30.301 [ 00:39:30.301 { 00:39:30.301 "name": "BaseBdev3", 00:39:30.301 "aliases": [ 00:39:30.301 "65826b21-5f5a-4c32-8f1d-82f7207e94c2" 00:39:30.301 ], 00:39:30.301 "product_name": "Malloc disk", 00:39:30.301 "block_size": 512, 00:39:30.301 "num_blocks": 65536, 00:39:30.301 "uuid": "65826b21-5f5a-4c32-8f1d-82f7207e94c2", 00:39:30.301 "assigned_rate_limits": { 00:39:30.301 "rw_ios_per_sec": 0, 00:39:30.301 "rw_mbytes_per_sec": 0, 00:39:30.301 "r_mbytes_per_sec": 0, 00:39:30.301 "w_mbytes_per_sec": 0 00:39:30.301 }, 00:39:30.301 "claimed": true, 00:39:30.301 "claim_type": "exclusive_write", 00:39:30.301 "zoned": false, 00:39:30.301 "supported_io_types": { 00:39:30.301 "read": true, 00:39:30.301 "write": true, 00:39:30.301 "unmap": true, 00:39:30.301 "flush": true, 00:39:30.301 "reset": true, 00:39:30.301 "nvme_admin": false, 00:39:30.301 "nvme_io": false, 00:39:30.301 "nvme_io_md": false, 00:39:30.301 "write_zeroes": true, 00:39:30.301 "zcopy": true, 00:39:30.301 "get_zone_info": false, 00:39:30.301 "zone_management": false, 00:39:30.301 "zone_append": false, 00:39:30.301 "compare": false, 00:39:30.301 "compare_and_write": false, 00:39:30.301 "abort": true, 00:39:30.301 "seek_hole": false, 00:39:30.301 "seek_data": false, 00:39:30.301 "copy": true, 00:39:30.301 "nvme_iov_md": false 00:39:30.301 }, 00:39:30.301 "memory_domains": [ 00:39:30.301 { 00:39:30.301 "dma_device_id": "system", 00:39:30.301 "dma_device_type": 1 00:39:30.301 }, 00:39:30.301 { 00:39:30.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:30.301 "dma_device_type": 2 00:39:30.301 } 00:39:30.301 ], 00:39:30.301 "driver_specific": {} 00:39:30.301 } 00:39:30.301 ] 00:39:30.301 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:39:30.301 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:39:30.301 23:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:39:30.301 23:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:39:30.301 23:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:39:30.301 23:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:30.301 23:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:30.301 23:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:39:30.301 23:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:30.301 23:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:30.301 23:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:30.301 23:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:30.301 23:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:30.301 23:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:30.301 23:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:30.301 23:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:30.301 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.301 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:30.301 23:21:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.301 23:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:30.301 "name": "Existed_Raid", 00:39:30.301 "uuid": "3f287bd3-b1f4-4ae6-954d-ad37cc57b21a", 00:39:30.301 "strip_size_kb": 64, 00:39:30.301 "state": "online", 00:39:30.301 "raid_level": "raid5f", 00:39:30.301 "superblock": false, 00:39:30.301 "num_base_bdevs": 3, 00:39:30.301 "num_base_bdevs_discovered": 3, 00:39:30.301 "num_base_bdevs_operational": 3, 00:39:30.301 "base_bdevs_list": [ 00:39:30.301 { 00:39:30.301 "name": "BaseBdev1", 00:39:30.301 "uuid": "239c0b09-e9ed-44ce-8ea3-1dea6c4b31a6", 00:39:30.301 "is_configured": true, 00:39:30.301 "data_offset": 0, 00:39:30.301 "data_size": 65536 00:39:30.301 }, 00:39:30.301 { 00:39:30.301 "name": "BaseBdev2", 00:39:30.301 "uuid": "004d4ca4-0ede-4407-a050-061b4da8ae68", 00:39:30.301 "is_configured": true, 00:39:30.301 "data_offset": 0, 00:39:30.301 "data_size": 65536 00:39:30.301 }, 00:39:30.301 { 00:39:30.301 "name": "BaseBdev3", 00:39:30.301 "uuid": "65826b21-5f5a-4c32-8f1d-82f7207e94c2", 00:39:30.301 "is_configured": true, 00:39:30.301 "data_offset": 0, 00:39:30.301 "data_size": 65536 00:39:30.301 } 00:39:30.301 ] 00:39:30.301 }' 00:39:30.301 23:21:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:30.301 23:21:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:30.558 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:39:30.558 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:39:30.558 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:39:30.558 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:39:30.558 23:21:11 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:39:30.558 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:39:30.558 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:39:30.558 23:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.558 23:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:30.558 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:39:30.558 [2024-12-09 23:21:11.161737] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:30.558 23:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.818 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:30.818 "name": "Existed_Raid", 00:39:30.818 "aliases": [ 00:39:30.818 "3f287bd3-b1f4-4ae6-954d-ad37cc57b21a" 00:39:30.818 ], 00:39:30.818 "product_name": "Raid Volume", 00:39:30.818 "block_size": 512, 00:39:30.818 "num_blocks": 131072, 00:39:30.818 "uuid": "3f287bd3-b1f4-4ae6-954d-ad37cc57b21a", 00:39:30.818 "assigned_rate_limits": { 00:39:30.818 "rw_ios_per_sec": 0, 00:39:30.818 "rw_mbytes_per_sec": 0, 00:39:30.818 "r_mbytes_per_sec": 0, 00:39:30.818 "w_mbytes_per_sec": 0 00:39:30.818 }, 00:39:30.818 "claimed": false, 00:39:30.818 "zoned": false, 00:39:30.818 "supported_io_types": { 00:39:30.818 "read": true, 00:39:30.818 "write": true, 00:39:30.818 "unmap": false, 00:39:30.818 "flush": false, 00:39:30.818 "reset": true, 00:39:30.818 "nvme_admin": false, 00:39:30.818 "nvme_io": false, 00:39:30.818 "nvme_io_md": false, 00:39:30.818 "write_zeroes": true, 00:39:30.818 "zcopy": false, 00:39:30.818 "get_zone_info": false, 00:39:30.818 "zone_management": false, 00:39:30.818 "zone_append": false, 
00:39:30.818 "compare": false, 00:39:30.818 "compare_and_write": false, 00:39:30.818 "abort": false, 00:39:30.818 "seek_hole": false, 00:39:30.818 "seek_data": false, 00:39:30.818 "copy": false, 00:39:30.818 "nvme_iov_md": false 00:39:30.818 }, 00:39:30.818 "driver_specific": { 00:39:30.818 "raid": { 00:39:30.818 "uuid": "3f287bd3-b1f4-4ae6-954d-ad37cc57b21a", 00:39:30.818 "strip_size_kb": 64, 00:39:30.818 "state": "online", 00:39:30.818 "raid_level": "raid5f", 00:39:30.818 "superblock": false, 00:39:30.818 "num_base_bdevs": 3, 00:39:30.818 "num_base_bdevs_discovered": 3, 00:39:30.818 "num_base_bdevs_operational": 3, 00:39:30.818 "base_bdevs_list": [ 00:39:30.818 { 00:39:30.818 "name": "BaseBdev1", 00:39:30.818 "uuid": "239c0b09-e9ed-44ce-8ea3-1dea6c4b31a6", 00:39:30.818 "is_configured": true, 00:39:30.818 "data_offset": 0, 00:39:30.818 "data_size": 65536 00:39:30.818 }, 00:39:30.818 { 00:39:30.818 "name": "BaseBdev2", 00:39:30.818 "uuid": "004d4ca4-0ede-4407-a050-061b4da8ae68", 00:39:30.818 "is_configured": true, 00:39:30.818 "data_offset": 0, 00:39:30.818 "data_size": 65536 00:39:30.818 }, 00:39:30.818 { 00:39:30.818 "name": "BaseBdev3", 00:39:30.818 "uuid": "65826b21-5f5a-4c32-8f1d-82f7207e94c2", 00:39:30.818 "is_configured": true, 00:39:30.818 "data_offset": 0, 00:39:30.818 "data_size": 65536 00:39:30.818 } 00:39:30.818 ] 00:39:30.818 } 00:39:30.818 } 00:39:30.818 }' 00:39:30.818 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:39:30.818 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:39:30.818 BaseBdev2 00:39:30.818 BaseBdev3' 00:39:30.818 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:30.818 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:39:30.818 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:30.818 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:39:30.818 23:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.818 23:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:30.818 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:30.818 23:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.818 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:30.818 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:30.818 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:30.818 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:39:30.818 23:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.818 23:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:30.818 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:30.818 23:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.818 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:30.818 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:30.818 23:21:11 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:30.818 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:30.818 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:39:30.818 23:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.818 23:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:30.818 23:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.818 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:30.818 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:30.818 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:39:30.818 23:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.818 23:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:30.818 [2024-12-09 23:21:11.413357] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:39:31.077 23:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.077 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:39:31.077 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:39:31.077 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:39:31.077 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:39:31.077 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:39:31.077 
23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:39:31.077 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:31.077 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:31.077 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:39:31.077 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:31.077 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:31.077 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:31.077 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:31.077 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:31.077 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:31.077 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:31.077 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:31.077 23:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:31.077 23:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:31.077 23:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.077 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:31.077 "name": "Existed_Raid", 00:39:31.077 "uuid": "3f287bd3-b1f4-4ae6-954d-ad37cc57b21a", 00:39:31.077 "strip_size_kb": 64, 00:39:31.077 "state": 
"online", 00:39:31.077 "raid_level": "raid5f", 00:39:31.077 "superblock": false, 00:39:31.077 "num_base_bdevs": 3, 00:39:31.077 "num_base_bdevs_discovered": 2, 00:39:31.077 "num_base_bdevs_operational": 2, 00:39:31.077 "base_bdevs_list": [ 00:39:31.077 { 00:39:31.077 "name": null, 00:39:31.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:31.077 "is_configured": false, 00:39:31.077 "data_offset": 0, 00:39:31.077 "data_size": 65536 00:39:31.077 }, 00:39:31.077 { 00:39:31.077 "name": "BaseBdev2", 00:39:31.077 "uuid": "004d4ca4-0ede-4407-a050-061b4da8ae68", 00:39:31.077 "is_configured": true, 00:39:31.077 "data_offset": 0, 00:39:31.077 "data_size": 65536 00:39:31.077 }, 00:39:31.077 { 00:39:31.077 "name": "BaseBdev3", 00:39:31.077 "uuid": "65826b21-5f5a-4c32-8f1d-82f7207e94c2", 00:39:31.077 "is_configured": true, 00:39:31.077 "data_offset": 0, 00:39:31.077 "data_size": 65536 00:39:31.077 } 00:39:31.077 ] 00:39:31.077 }' 00:39:31.077 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:31.077 23:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:31.336 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:39:31.336 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:39:31.336 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:31.336 23:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:31.336 23:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:31.336 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:39:31.336 23:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.594 23:21:11 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:39:31.594 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:39:31.594 23:21:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:39:31.594 23:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:31.594 23:21:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:31.594 [2024-12-09 23:21:12.004836] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:39:31.594 [2024-12-09 23:21:12.004949] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:31.594 [2024-12-09 23:21:12.101070] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:31.594 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.594 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:39:31.594 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:39:31.594 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:31.594 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:31.594 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:39:31.594 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:31.594 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.594 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:39:31.594 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:39:31.594 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:39:31.594 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:31.594 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:31.594 [2024-12-09 23:21:12.157035] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:39:31.594 [2024-12-09 23:21:12.157095] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:39:31.854 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.854 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:39:31.854 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:39:31.854 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:31.854 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:39:31.854 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:31.854 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:31.854 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.854 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:39:31.854 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:39:31.854 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:39:31.854 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:39:31.854 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:39:31.854 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:39:31.854 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:31.854 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:31.854 BaseBdev2 00:39:31.854 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.854 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:39:31.854 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:39:31.854 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:31.854 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:39:31.854 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:31.854 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:31.854 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:31.854 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:31.854 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:31.854 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.854 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:39:31.854 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:31.854 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:39:31.854 [ 00:39:31.854 { 00:39:31.854 "name": "BaseBdev2", 00:39:31.854 "aliases": [ 00:39:31.854 "97f15aaf-7998-4976-941b-925ec2c0f61c" 00:39:31.854 ], 00:39:31.854 "product_name": "Malloc disk", 00:39:31.854 "block_size": 512, 00:39:31.854 "num_blocks": 65536, 00:39:31.854 "uuid": "97f15aaf-7998-4976-941b-925ec2c0f61c", 00:39:31.854 "assigned_rate_limits": { 00:39:31.854 "rw_ios_per_sec": 0, 00:39:31.854 "rw_mbytes_per_sec": 0, 00:39:31.854 "r_mbytes_per_sec": 0, 00:39:31.854 "w_mbytes_per_sec": 0 00:39:31.854 }, 00:39:31.854 "claimed": false, 00:39:31.854 "zoned": false, 00:39:31.854 "supported_io_types": { 00:39:31.854 "read": true, 00:39:31.854 "write": true, 00:39:31.854 "unmap": true, 00:39:31.854 "flush": true, 00:39:31.854 "reset": true, 00:39:31.854 "nvme_admin": false, 00:39:31.854 "nvme_io": false, 00:39:31.854 "nvme_io_md": false, 00:39:31.854 "write_zeroes": true, 00:39:31.854 "zcopy": true, 00:39:31.854 "get_zone_info": false, 00:39:31.854 "zone_management": false, 00:39:31.854 "zone_append": false, 00:39:31.854 "compare": false, 00:39:31.854 "compare_and_write": false, 00:39:31.854 "abort": true, 00:39:31.854 "seek_hole": false, 00:39:31.854 "seek_data": false, 00:39:31.854 "copy": true, 00:39:31.854 "nvme_iov_md": false 00:39:31.854 }, 00:39:31.854 "memory_domains": [ 00:39:31.854 { 00:39:31.854 "dma_device_id": "system", 00:39:31.854 "dma_device_type": 1 00:39:31.854 }, 00:39:31.854 { 00:39:31.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:31.854 "dma_device_type": 2 00:39:31.855 } 00:39:31.855 ], 00:39:31.855 "driver_specific": {} 00:39:31.855 } 00:39:31.855 ] 00:39:31.855 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.855 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:39:31.855 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:39:31.855 23:21:12 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:39:31.855 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:39:31.855 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:31.855 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:31.855 BaseBdev3 00:39:31.855 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.855 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:39:31.855 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:39:31.855 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:31.855 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:39:31.855 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:31.855 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:31.855 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:31.855 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:31.855 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:31.855 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.855 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:39:31.855 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:31.855 23:21:12 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:39:31.855 [ 00:39:31.855 { 00:39:31.855 "name": "BaseBdev3", 00:39:31.855 "aliases": [ 00:39:31.855 "eb70a903-3f4c-4892-9b16-5d487fed80ec" 00:39:31.855 ], 00:39:31.855 "product_name": "Malloc disk", 00:39:31.855 "block_size": 512, 00:39:31.855 "num_blocks": 65536, 00:39:31.855 "uuid": "eb70a903-3f4c-4892-9b16-5d487fed80ec", 00:39:31.855 "assigned_rate_limits": { 00:39:31.855 "rw_ios_per_sec": 0, 00:39:31.855 "rw_mbytes_per_sec": 0, 00:39:31.855 "r_mbytes_per_sec": 0, 00:39:31.855 "w_mbytes_per_sec": 0 00:39:31.855 }, 00:39:31.855 "claimed": false, 00:39:31.855 "zoned": false, 00:39:31.855 "supported_io_types": { 00:39:31.855 "read": true, 00:39:31.855 "write": true, 00:39:31.855 "unmap": true, 00:39:31.855 "flush": true, 00:39:31.855 "reset": true, 00:39:31.855 "nvme_admin": false, 00:39:31.855 "nvme_io": false, 00:39:31.855 "nvme_io_md": false, 00:39:31.855 "write_zeroes": true, 00:39:31.855 "zcopy": true, 00:39:31.855 "get_zone_info": false, 00:39:31.855 "zone_management": false, 00:39:31.855 "zone_append": false, 00:39:31.855 "compare": false, 00:39:31.855 "compare_and_write": false, 00:39:31.855 "abort": true, 00:39:31.855 "seek_hole": false, 00:39:31.855 "seek_data": false, 00:39:31.855 "copy": true, 00:39:31.855 "nvme_iov_md": false 00:39:31.855 }, 00:39:31.855 "memory_domains": [ 00:39:31.855 { 00:39:31.855 "dma_device_id": "system", 00:39:31.855 "dma_device_type": 1 00:39:31.855 }, 00:39:31.855 { 00:39:31.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:31.855 "dma_device_type": 2 00:39:31.855 } 00:39:31.855 ], 00:39:31.855 "driver_specific": {} 00:39:31.855 } 00:39:31.855 ] 00:39:31.855 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.855 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:39:31.855 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:39:31.855 23:21:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:39:31.855 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:39:31.855 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:31.855 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:31.855 [2024-12-09 23:21:12.476785] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:39:31.855 [2024-12-09 23:21:12.476845] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:39:31.855 [2024-12-09 23:21:12.476874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:31.855 [2024-12-09 23:21:12.479025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:39:31.855 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.855 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:39:31.855 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:31.855 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:31.855 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:39:31.855 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:31.855 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:31.855 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:31.855 23:21:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:31.855 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:31.855 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:31.855 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:32.115 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:32.115 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:32.115 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:32.115 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:32.115 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:32.115 "name": "Existed_Raid", 00:39:32.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:32.115 "strip_size_kb": 64, 00:39:32.115 "state": "configuring", 00:39:32.115 "raid_level": "raid5f", 00:39:32.115 "superblock": false, 00:39:32.115 "num_base_bdevs": 3, 00:39:32.115 "num_base_bdevs_discovered": 2, 00:39:32.115 "num_base_bdevs_operational": 3, 00:39:32.115 "base_bdevs_list": [ 00:39:32.115 { 00:39:32.115 "name": "BaseBdev1", 00:39:32.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:32.115 "is_configured": false, 00:39:32.115 "data_offset": 0, 00:39:32.115 "data_size": 0 00:39:32.115 }, 00:39:32.115 { 00:39:32.115 "name": "BaseBdev2", 00:39:32.115 "uuid": "97f15aaf-7998-4976-941b-925ec2c0f61c", 00:39:32.115 "is_configured": true, 00:39:32.115 "data_offset": 0, 00:39:32.115 "data_size": 65536 00:39:32.115 }, 00:39:32.115 { 00:39:32.115 "name": "BaseBdev3", 00:39:32.115 "uuid": "eb70a903-3f4c-4892-9b16-5d487fed80ec", 00:39:32.115 "is_configured": true, 
00:39:32.115 "data_offset": 0, 00:39:32.115 "data_size": 65536 00:39:32.115 } 00:39:32.115 ] 00:39:32.115 }' 00:39:32.115 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:32.115 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:32.373 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:39:32.373 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:32.373 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:32.373 [2024-12-09 23:21:12.932145] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:39:32.373 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:32.373 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:39:32.373 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:32.373 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:32.373 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:39:32.373 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:32.373 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:32.373 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:32.373 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:32.373 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:32.373 23:21:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:32.373 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:32.373 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:32.373 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:32.373 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:32.373 23:21:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:32.373 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:32.373 "name": "Existed_Raid", 00:39:32.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:32.373 "strip_size_kb": 64, 00:39:32.373 "state": "configuring", 00:39:32.373 "raid_level": "raid5f", 00:39:32.373 "superblock": false, 00:39:32.373 "num_base_bdevs": 3, 00:39:32.373 "num_base_bdevs_discovered": 1, 00:39:32.373 "num_base_bdevs_operational": 3, 00:39:32.373 "base_bdevs_list": [ 00:39:32.373 { 00:39:32.373 "name": "BaseBdev1", 00:39:32.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:32.373 "is_configured": false, 00:39:32.374 "data_offset": 0, 00:39:32.374 "data_size": 0 00:39:32.374 }, 00:39:32.374 { 00:39:32.374 "name": null, 00:39:32.374 "uuid": "97f15aaf-7998-4976-941b-925ec2c0f61c", 00:39:32.374 "is_configured": false, 00:39:32.374 "data_offset": 0, 00:39:32.374 "data_size": 65536 00:39:32.374 }, 00:39:32.374 { 00:39:32.374 "name": "BaseBdev3", 00:39:32.374 "uuid": "eb70a903-3f4c-4892-9b16-5d487fed80ec", 00:39:32.374 "is_configured": true, 00:39:32.374 "data_offset": 0, 00:39:32.374 "data_size": 65536 00:39:32.374 } 00:39:32.374 ] 00:39:32.374 }' 00:39:32.374 23:21:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:32.374 23:21:12 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:33.015 23:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:39:33.015 23:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:33.015 23:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.015 23:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:33.015 23:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.015 23:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:39:33.015 23:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:39:33.015 23:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.015 23:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:33.015 [2024-12-09 23:21:13.474076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:33.015 BaseBdev1 00:39:33.015 23:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.015 23:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:39:33.015 23:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:39:33.015 23:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:33.015 23:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:39:33.015 23:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:33.015 23:21:13 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:33.015 23:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:33.015 23:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.015 23:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:33.015 23:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.015 23:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:39:33.015 23:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.015 23:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:33.015 [ 00:39:33.015 { 00:39:33.015 "name": "BaseBdev1", 00:39:33.015 "aliases": [ 00:39:33.015 "a62d5496-58da-4d78-9201-3dc50bb4e6cb" 00:39:33.015 ], 00:39:33.015 "product_name": "Malloc disk", 00:39:33.015 "block_size": 512, 00:39:33.015 "num_blocks": 65536, 00:39:33.015 "uuid": "a62d5496-58da-4d78-9201-3dc50bb4e6cb", 00:39:33.015 "assigned_rate_limits": { 00:39:33.015 "rw_ios_per_sec": 0, 00:39:33.015 "rw_mbytes_per_sec": 0, 00:39:33.015 "r_mbytes_per_sec": 0, 00:39:33.015 "w_mbytes_per_sec": 0 00:39:33.015 }, 00:39:33.015 "claimed": true, 00:39:33.015 "claim_type": "exclusive_write", 00:39:33.015 "zoned": false, 00:39:33.015 "supported_io_types": { 00:39:33.015 "read": true, 00:39:33.015 "write": true, 00:39:33.015 "unmap": true, 00:39:33.015 "flush": true, 00:39:33.015 "reset": true, 00:39:33.015 "nvme_admin": false, 00:39:33.015 "nvme_io": false, 00:39:33.015 "nvme_io_md": false, 00:39:33.015 "write_zeroes": true, 00:39:33.015 "zcopy": true, 00:39:33.015 "get_zone_info": false, 00:39:33.015 "zone_management": false, 00:39:33.015 "zone_append": false, 00:39:33.015 
"compare": false, 00:39:33.015 "compare_and_write": false, 00:39:33.015 "abort": true, 00:39:33.015 "seek_hole": false, 00:39:33.015 "seek_data": false, 00:39:33.015 "copy": true, 00:39:33.015 "nvme_iov_md": false 00:39:33.015 }, 00:39:33.015 "memory_domains": [ 00:39:33.015 { 00:39:33.015 "dma_device_id": "system", 00:39:33.015 "dma_device_type": 1 00:39:33.015 }, 00:39:33.015 { 00:39:33.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:33.015 "dma_device_type": 2 00:39:33.015 } 00:39:33.015 ], 00:39:33.015 "driver_specific": {} 00:39:33.015 } 00:39:33.015 ] 00:39:33.015 23:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.015 23:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:39:33.015 23:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:39:33.015 23:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:33.015 23:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:33.015 23:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:39:33.015 23:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:33.015 23:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:33.015 23:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:33.015 23:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:33.015 23:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:33.015 23:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:33.016 23:21:13 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:33.016 23:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:33.016 23:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.016 23:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:33.016 23:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.016 23:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:33.016 "name": "Existed_Raid", 00:39:33.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:33.016 "strip_size_kb": 64, 00:39:33.016 "state": "configuring", 00:39:33.016 "raid_level": "raid5f", 00:39:33.016 "superblock": false, 00:39:33.016 "num_base_bdevs": 3, 00:39:33.016 "num_base_bdevs_discovered": 2, 00:39:33.016 "num_base_bdevs_operational": 3, 00:39:33.016 "base_bdevs_list": [ 00:39:33.016 { 00:39:33.016 "name": "BaseBdev1", 00:39:33.016 "uuid": "a62d5496-58da-4d78-9201-3dc50bb4e6cb", 00:39:33.016 "is_configured": true, 00:39:33.016 "data_offset": 0, 00:39:33.016 "data_size": 65536 00:39:33.016 }, 00:39:33.016 { 00:39:33.016 "name": null, 00:39:33.016 "uuid": "97f15aaf-7998-4976-941b-925ec2c0f61c", 00:39:33.016 "is_configured": false, 00:39:33.016 "data_offset": 0, 00:39:33.016 "data_size": 65536 00:39:33.016 }, 00:39:33.016 { 00:39:33.016 "name": "BaseBdev3", 00:39:33.016 "uuid": "eb70a903-3f4c-4892-9b16-5d487fed80ec", 00:39:33.016 "is_configured": true, 00:39:33.016 "data_offset": 0, 00:39:33.016 "data_size": 65536 00:39:33.016 } 00:39:33.016 ] 00:39:33.016 }' 00:39:33.016 23:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:33.016 23:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:33.585 23:21:13 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:33.585 23:21:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:39:33.585 23:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.585 23:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:33.585 23:21:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.585 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:39:33.585 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:39:33.585 23:21:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.585 23:21:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:33.585 [2024-12-09 23:21:14.025544] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:39:33.585 23:21:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.585 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:39:33.585 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:33.585 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:33.585 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:39:33.585 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:33.585 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:33.585 23:21:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:33.585 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:33.585 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:33.585 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:33.585 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:33.585 23:21:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.585 23:21:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:33.585 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:33.585 23:21:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.585 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:33.585 "name": "Existed_Raid", 00:39:33.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:33.585 "strip_size_kb": 64, 00:39:33.585 "state": "configuring", 00:39:33.585 "raid_level": "raid5f", 00:39:33.585 "superblock": false, 00:39:33.585 "num_base_bdevs": 3, 00:39:33.585 "num_base_bdevs_discovered": 1, 00:39:33.585 "num_base_bdevs_operational": 3, 00:39:33.585 "base_bdevs_list": [ 00:39:33.585 { 00:39:33.585 "name": "BaseBdev1", 00:39:33.585 "uuid": "a62d5496-58da-4d78-9201-3dc50bb4e6cb", 00:39:33.585 "is_configured": true, 00:39:33.585 "data_offset": 0, 00:39:33.585 "data_size": 65536 00:39:33.585 }, 00:39:33.585 { 00:39:33.585 "name": null, 00:39:33.585 "uuid": "97f15aaf-7998-4976-941b-925ec2c0f61c", 00:39:33.585 "is_configured": false, 00:39:33.585 "data_offset": 0, 00:39:33.585 "data_size": 65536 00:39:33.585 }, 00:39:33.585 { 00:39:33.585 "name": null, 
00:39:33.585 "uuid": "eb70a903-3f4c-4892-9b16-5d487fed80ec", 00:39:33.585 "is_configured": false, 00:39:33.585 "data_offset": 0, 00:39:33.585 "data_size": 65536 00:39:33.585 } 00:39:33.585 ] 00:39:33.585 }' 00:39:33.585 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:33.585 23:21:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:33.844 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:33.844 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:39:33.844 23:21:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.844 23:21:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:33.844 23:21:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.844 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:39:33.844 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:39:33.844 23:21:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.844 23:21:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:33.844 [2024-12-09 23:21:14.465022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:39:33.844 23:21:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.844 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:39:33.844 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:33.844 23:21:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:33.844 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:39:33.844 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:33.844 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:33.844 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:33.844 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:33.844 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:33.844 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:33.844 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:33.844 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:33.844 23:21:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.844 23:21:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:34.104 23:21:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.104 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:34.104 "name": "Existed_Raid", 00:39:34.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:34.104 "strip_size_kb": 64, 00:39:34.104 "state": "configuring", 00:39:34.104 "raid_level": "raid5f", 00:39:34.104 "superblock": false, 00:39:34.104 "num_base_bdevs": 3, 00:39:34.104 "num_base_bdevs_discovered": 2, 00:39:34.104 "num_base_bdevs_operational": 3, 00:39:34.104 "base_bdevs_list": [ 00:39:34.104 { 
00:39:34.104 "name": "BaseBdev1", 00:39:34.104 "uuid": "a62d5496-58da-4d78-9201-3dc50bb4e6cb", 00:39:34.104 "is_configured": true, 00:39:34.104 "data_offset": 0, 00:39:34.104 "data_size": 65536 00:39:34.104 }, 00:39:34.104 { 00:39:34.104 "name": null, 00:39:34.104 "uuid": "97f15aaf-7998-4976-941b-925ec2c0f61c", 00:39:34.104 "is_configured": false, 00:39:34.104 "data_offset": 0, 00:39:34.104 "data_size": 65536 00:39:34.104 }, 00:39:34.104 { 00:39:34.104 "name": "BaseBdev3", 00:39:34.104 "uuid": "eb70a903-3f4c-4892-9b16-5d487fed80ec", 00:39:34.104 "is_configured": true, 00:39:34.104 "data_offset": 0, 00:39:34.104 "data_size": 65536 00:39:34.104 } 00:39:34.104 ] 00:39:34.104 }' 00:39:34.104 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:34.104 23:21:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:34.363 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:34.363 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:39:34.363 23:21:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.363 23:21:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:34.363 23:21:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.363 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:39:34.363 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:39:34.363 23:21:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.363 23:21:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:34.363 [2024-12-09 23:21:14.884548] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:39:34.363 23:21:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.363 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:39:34.363 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:34.363 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:34.363 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:39:34.363 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:34.363 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:34.363 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:34.363 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:34.363 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:34.363 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:34.363 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:34.363 23:21:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:34.363 23:21:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.363 23:21:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:34.625 23:21:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.625 23:21:15 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:34.625 "name": "Existed_Raid", 00:39:34.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:34.625 "strip_size_kb": 64, 00:39:34.625 "state": "configuring", 00:39:34.625 "raid_level": "raid5f", 00:39:34.625 "superblock": false, 00:39:34.625 "num_base_bdevs": 3, 00:39:34.625 "num_base_bdevs_discovered": 1, 00:39:34.625 "num_base_bdevs_operational": 3, 00:39:34.625 "base_bdevs_list": [ 00:39:34.625 { 00:39:34.625 "name": null, 00:39:34.625 "uuid": "a62d5496-58da-4d78-9201-3dc50bb4e6cb", 00:39:34.625 "is_configured": false, 00:39:34.625 "data_offset": 0, 00:39:34.625 "data_size": 65536 00:39:34.625 }, 00:39:34.625 { 00:39:34.625 "name": null, 00:39:34.625 "uuid": "97f15aaf-7998-4976-941b-925ec2c0f61c", 00:39:34.625 "is_configured": false, 00:39:34.625 "data_offset": 0, 00:39:34.625 "data_size": 65536 00:39:34.625 }, 00:39:34.625 { 00:39:34.625 "name": "BaseBdev3", 00:39:34.625 "uuid": "eb70a903-3f4c-4892-9b16-5d487fed80ec", 00:39:34.625 "is_configured": true, 00:39:34.625 "data_offset": 0, 00:39:34.625 "data_size": 65536 00:39:34.625 } 00:39:34.625 ] 00:39:34.625 }' 00:39:34.625 23:21:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:34.625 23:21:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:34.888 23:21:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:34.888 23:21:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.888 23:21:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:34.888 23:21:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:39:34.888 23:21:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.888 23:21:15 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:39:34.888 23:21:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:39:34.888 23:21:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.888 23:21:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:34.888 [2024-12-09 23:21:15.432479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:34.888 23:21:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.888 23:21:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:39:34.888 23:21:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:34.888 23:21:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:34.888 23:21:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:39:34.888 23:21:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:34.888 23:21:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:34.888 23:21:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:34.888 23:21:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:34.888 23:21:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:34.888 23:21:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:34.888 23:21:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:34.888 23:21:15 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.888 23:21:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:34.888 23:21:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:34.888 23:21:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.888 23:21:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:34.888 "name": "Existed_Raid", 00:39:34.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:34.888 "strip_size_kb": 64, 00:39:34.888 "state": "configuring", 00:39:34.888 "raid_level": "raid5f", 00:39:34.888 "superblock": false, 00:39:34.888 "num_base_bdevs": 3, 00:39:34.888 "num_base_bdevs_discovered": 2, 00:39:34.888 "num_base_bdevs_operational": 3, 00:39:34.888 "base_bdevs_list": [ 00:39:34.888 { 00:39:34.888 "name": null, 00:39:34.888 "uuid": "a62d5496-58da-4d78-9201-3dc50bb4e6cb", 00:39:34.888 "is_configured": false, 00:39:34.888 "data_offset": 0, 00:39:34.888 "data_size": 65536 00:39:34.888 }, 00:39:34.888 { 00:39:34.888 "name": "BaseBdev2", 00:39:34.888 "uuid": "97f15aaf-7998-4976-941b-925ec2c0f61c", 00:39:34.888 "is_configured": true, 00:39:34.888 "data_offset": 0, 00:39:34.888 "data_size": 65536 00:39:34.888 }, 00:39:34.888 { 00:39:34.888 "name": "BaseBdev3", 00:39:34.888 "uuid": "eb70a903-3f4c-4892-9b16-5d487fed80ec", 00:39:34.888 "is_configured": true, 00:39:34.888 "data_offset": 0, 00:39:34.888 "data_size": 65536 00:39:34.888 } 00:39:34.888 ] 00:39:34.888 }' 00:39:34.888 23:21:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:34.888 23:21:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:35.457 23:21:15 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a62d5496-58da-4d78-9201-3dc50bb4e6cb 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:35.457 [2024-12-09 23:21:15.946595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:39:35.457 [2024-12-09 23:21:15.946646] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:39:35.457 [2024-12-09 23:21:15.946659] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:39:35.457 [2024-12-09 23:21:15.946922] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:39:35.457 [2024-12-09 23:21:15.952416] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:39:35.457 [2024-12-09 23:21:15.952442] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:39:35.457 [2024-12-09 23:21:15.952712] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:35.457 NewBaseBdev 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:35.457 23:21:15 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:35.457 [ 00:39:35.457 { 00:39:35.457 "name": "NewBaseBdev", 00:39:35.457 "aliases": [ 00:39:35.457 "a62d5496-58da-4d78-9201-3dc50bb4e6cb" 00:39:35.457 ], 00:39:35.457 "product_name": "Malloc disk", 00:39:35.457 "block_size": 512, 00:39:35.457 "num_blocks": 65536, 00:39:35.457 "uuid": "a62d5496-58da-4d78-9201-3dc50bb4e6cb", 00:39:35.457 "assigned_rate_limits": { 00:39:35.457 "rw_ios_per_sec": 0, 00:39:35.457 "rw_mbytes_per_sec": 0, 00:39:35.457 "r_mbytes_per_sec": 0, 00:39:35.457 "w_mbytes_per_sec": 0 00:39:35.457 }, 00:39:35.457 "claimed": true, 00:39:35.457 "claim_type": "exclusive_write", 00:39:35.457 "zoned": false, 00:39:35.457 "supported_io_types": { 00:39:35.457 "read": true, 00:39:35.457 "write": true, 00:39:35.457 "unmap": true, 00:39:35.457 "flush": true, 00:39:35.457 "reset": true, 00:39:35.457 "nvme_admin": false, 00:39:35.457 "nvme_io": false, 00:39:35.457 "nvme_io_md": false, 00:39:35.457 "write_zeroes": true, 00:39:35.457 "zcopy": true, 00:39:35.457 "get_zone_info": false, 00:39:35.457 "zone_management": false, 00:39:35.457 "zone_append": false, 00:39:35.457 "compare": false, 00:39:35.457 "compare_and_write": false, 00:39:35.457 "abort": true, 00:39:35.457 "seek_hole": false, 00:39:35.457 "seek_data": false, 00:39:35.457 "copy": true, 00:39:35.457 "nvme_iov_md": false 00:39:35.457 }, 00:39:35.457 "memory_domains": [ 00:39:35.457 { 00:39:35.457 "dma_device_id": "system", 00:39:35.457 "dma_device_type": 1 00:39:35.457 }, 00:39:35.457 { 00:39:35.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:35.457 "dma_device_type": 2 00:39:35.457 } 00:39:35.457 ], 00:39:35.457 "driver_specific": {} 00:39:35.457 } 00:39:35.457 ] 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:39:35.457 23:21:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:35.457 23:21:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:35.457 23:21:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:35.457 23:21:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:35.457 23:21:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:35.457 "name": "Existed_Raid", 00:39:35.457 "uuid": "c3bb6194-2188-44b3-b7a3-7a316509dc22", 00:39:35.457 "strip_size_kb": 64, 00:39:35.457 "state": "online", 
00:39:35.457 "raid_level": "raid5f", 00:39:35.457 "superblock": false, 00:39:35.457 "num_base_bdevs": 3, 00:39:35.457 "num_base_bdevs_discovered": 3, 00:39:35.457 "num_base_bdevs_operational": 3, 00:39:35.457 "base_bdevs_list": [ 00:39:35.457 { 00:39:35.457 "name": "NewBaseBdev", 00:39:35.457 "uuid": "a62d5496-58da-4d78-9201-3dc50bb4e6cb", 00:39:35.457 "is_configured": true, 00:39:35.457 "data_offset": 0, 00:39:35.457 "data_size": 65536 00:39:35.457 }, 00:39:35.457 { 00:39:35.457 "name": "BaseBdev2", 00:39:35.457 "uuid": "97f15aaf-7998-4976-941b-925ec2c0f61c", 00:39:35.457 "is_configured": true, 00:39:35.457 "data_offset": 0, 00:39:35.457 "data_size": 65536 00:39:35.457 }, 00:39:35.457 { 00:39:35.457 "name": "BaseBdev3", 00:39:35.457 "uuid": "eb70a903-3f4c-4892-9b16-5d487fed80ec", 00:39:35.457 "is_configured": true, 00:39:35.457 "data_offset": 0, 00:39:35.457 "data_size": 65536 00:39:35.457 } 00:39:35.457 ] 00:39:35.457 }' 00:39:35.457 23:21:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:35.457 23:21:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:36.026 [2024-12-09 23:21:16.418743] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:36.026 "name": "Existed_Raid", 00:39:36.026 "aliases": [ 00:39:36.026 "c3bb6194-2188-44b3-b7a3-7a316509dc22" 00:39:36.026 ], 00:39:36.026 "product_name": "Raid Volume", 00:39:36.026 "block_size": 512, 00:39:36.026 "num_blocks": 131072, 00:39:36.026 "uuid": "c3bb6194-2188-44b3-b7a3-7a316509dc22", 00:39:36.026 "assigned_rate_limits": { 00:39:36.026 "rw_ios_per_sec": 0, 00:39:36.026 "rw_mbytes_per_sec": 0, 00:39:36.026 "r_mbytes_per_sec": 0, 00:39:36.026 "w_mbytes_per_sec": 0 00:39:36.026 }, 00:39:36.026 "claimed": false, 00:39:36.026 "zoned": false, 00:39:36.026 "supported_io_types": { 00:39:36.026 "read": true, 00:39:36.026 "write": true, 00:39:36.026 "unmap": false, 00:39:36.026 "flush": false, 00:39:36.026 "reset": true, 00:39:36.026 "nvme_admin": false, 00:39:36.026 "nvme_io": false, 00:39:36.026 "nvme_io_md": false, 00:39:36.026 "write_zeroes": true, 00:39:36.026 "zcopy": false, 00:39:36.026 "get_zone_info": false, 00:39:36.026 "zone_management": false, 00:39:36.026 "zone_append": false, 00:39:36.026 "compare": false, 00:39:36.026 "compare_and_write": false, 00:39:36.026 "abort": false, 00:39:36.026 "seek_hole": false, 00:39:36.026 "seek_data": false, 00:39:36.026 "copy": false, 00:39:36.026 "nvme_iov_md": false 00:39:36.026 }, 00:39:36.026 "driver_specific": { 00:39:36.026 "raid": { 00:39:36.026 "uuid": "c3bb6194-2188-44b3-b7a3-7a316509dc22", 
00:39:36.026 "strip_size_kb": 64, 00:39:36.026 "state": "online", 00:39:36.026 "raid_level": "raid5f", 00:39:36.026 "superblock": false, 00:39:36.026 "num_base_bdevs": 3, 00:39:36.026 "num_base_bdevs_discovered": 3, 00:39:36.026 "num_base_bdevs_operational": 3, 00:39:36.026 "base_bdevs_list": [ 00:39:36.026 { 00:39:36.026 "name": "NewBaseBdev", 00:39:36.026 "uuid": "a62d5496-58da-4d78-9201-3dc50bb4e6cb", 00:39:36.026 "is_configured": true, 00:39:36.026 "data_offset": 0, 00:39:36.026 "data_size": 65536 00:39:36.026 }, 00:39:36.026 { 00:39:36.026 "name": "BaseBdev2", 00:39:36.026 "uuid": "97f15aaf-7998-4976-941b-925ec2c0f61c", 00:39:36.026 "is_configured": true, 00:39:36.026 "data_offset": 0, 00:39:36.026 "data_size": 65536 00:39:36.026 }, 00:39:36.026 { 00:39:36.026 "name": "BaseBdev3", 00:39:36.026 "uuid": "eb70a903-3f4c-4892-9b16-5d487fed80ec", 00:39:36.026 "is_configured": true, 00:39:36.026 "data_offset": 0, 00:39:36.026 "data_size": 65536 00:39:36.026 } 00:39:36.026 ] 00:39:36.026 } 00:39:36.026 } 00:39:36.026 }' 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:39:36.026 BaseBdev2 00:39:36.026 BaseBdev3' 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:36.026 [2024-12-09 23:21:16.654579] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:39:36.026 [2024-12-09 23:21:16.654615] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:36.026 [2024-12-09 23:21:16.654706] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:36.026 [2024-12-09 23:21:16.655003] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:36.026 [2024-12-09 23:21:16.655029] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79774 00:39:36.026 23:21:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 79774 ']' 00:39:36.285 23:21:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 
79774 00:39:36.285 23:21:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:39:36.285 23:21:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:36.285 23:21:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79774 00:39:36.285 23:21:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:36.285 23:21:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:36.285 killing process with pid 79774 00:39:36.285 23:21:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79774' 00:39:36.285 23:21:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 79774 00:39:36.285 [2024-12-09 23:21:16.702829] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:36.285 23:21:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 79774 00:39:36.543 [2024-12-09 23:21:17.015158] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:37.919 23:21:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:39:37.919 00:39:37.919 real 0m10.446s 00:39:37.919 user 0m16.524s 00:39:37.919 sys 0m2.151s 00:39:37.919 23:21:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:37.919 23:21:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:39:37.919 ************************************ 00:39:37.919 END TEST raid5f_state_function_test 00:39:37.919 ************************************ 00:39:37.919 23:21:18 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:39:37.919 23:21:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 
00:39:37.919 23:21:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:37.919 23:21:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:39:37.919 ************************************ 00:39:37.919 START TEST raid5f_state_function_test_sb 00:39:37.919 ************************************ 00:39:37.919 23:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:39:37.919 23:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:39:37.919 23:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:39:37.919 23:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:39:37.919 23:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:39:37.919 23:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:39:37.919 23:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:39:37.919 23:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:39:37.919 23:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:39:37.919 23:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:39:37.919 23:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:39:37.919 23:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:39:37.919 23:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:39:37.919 23:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:39:37.919 23:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:39:37.919 23:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:39:37.919 23:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:39:37.919 23:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:39:37.919 23:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:39:37.919 23:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:39:37.919 23:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:39:37.919 23:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:39:37.919 23:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:39:37.919 23:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:39:37.919 23:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:39:37.919 23:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:39:37.919 23:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:39:37.919 23:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80395 00:39:37.919 23:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:39:37.919 23:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80395' 00:39:37.919 Process raid pid: 80395 00:39:37.919 23:21:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80395 00:39:37.919 23:21:18 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80395 ']' 00:39:37.919 23:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:37.919 23:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:37.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:37.919 23:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:37.919 23:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:37.919 23:21:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:37.919 [2024-12-09 23:21:18.350003] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:39:37.919 [2024-12-09 23:21:18.350132] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:37.919 [2024-12-09 23:21:18.533171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:38.175 [2024-12-09 23:21:18.656895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:38.434 [2024-12-09 23:21:18.879359] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:38.434 [2024-12-09 23:21:18.879422] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:38.696 23:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:38.697 23:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:39:38.697 23:21:19 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:39:38.697 23:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.697 23:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:38.697 [2024-12-09 23:21:19.201610] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:39:38.697 [2024-12-09 23:21:19.201688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:39:38.697 [2024-12-09 23:21:19.201701] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:39:38.697 [2024-12-09 23:21:19.201713] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:39:38.697 [2024-12-09 23:21:19.201744] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:39:38.697 [2024-12-09 23:21:19.201757] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:39:38.697 23:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.697 23:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:39:38.697 23:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:38.697 23:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:38.697 23:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:39:38.697 23:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:38.697 23:21:19 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:38.697 23:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:38.697 23:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:38.697 23:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:38.697 23:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:38.697 23:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:38.697 23:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:38.697 23:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.697 23:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:38.697 23:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.697 23:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:38.697 "name": "Existed_Raid", 00:39:38.697 "uuid": "5fab49d3-f99c-48ff-92c1-af1de9b4c44b", 00:39:38.697 "strip_size_kb": 64, 00:39:38.697 "state": "configuring", 00:39:38.697 "raid_level": "raid5f", 00:39:38.697 "superblock": true, 00:39:38.697 "num_base_bdevs": 3, 00:39:38.697 "num_base_bdevs_discovered": 0, 00:39:38.697 "num_base_bdevs_operational": 3, 00:39:38.697 "base_bdevs_list": [ 00:39:38.697 { 00:39:38.697 "name": "BaseBdev1", 00:39:38.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:38.697 "is_configured": false, 00:39:38.697 "data_offset": 0, 00:39:38.697 "data_size": 0 00:39:38.697 }, 00:39:38.697 { 00:39:38.697 "name": "BaseBdev2", 00:39:38.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:38.697 "is_configured": false, 00:39:38.697 
"data_offset": 0, 00:39:38.697 "data_size": 0 00:39:38.697 }, 00:39:38.697 { 00:39:38.697 "name": "BaseBdev3", 00:39:38.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:38.697 "is_configured": false, 00:39:38.697 "data_offset": 0, 00:39:38.697 "data_size": 0 00:39:38.697 } 00:39:38.697 ] 00:39:38.697 }' 00:39:38.697 23:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:38.697 23:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:39.263 23:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:39:39.263 23:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:39.263 23:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:39.263 [2024-12-09 23:21:19.612958] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:39:39.263 [2024-12-09 23:21:19.613003] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:39:39.263 23:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.263 23:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:39:39.263 23:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:39.263 23:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:39.263 [2024-12-09 23:21:19.624983] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:39:39.263 [2024-12-09 23:21:19.625047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:39:39.263 [2024-12-09 23:21:19.625057] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:39:39.263 [2024-12-09 23:21:19.625070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:39:39.263 [2024-12-09 23:21:19.625077] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:39:39.263 [2024-12-09 23:21:19.625090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:39:39.263 23:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.263 23:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:39:39.263 23:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:39.263 23:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:39.263 [2024-12-09 23:21:19.672827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:39.263 BaseBdev1 00:39:39.263 23:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.263 23:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:39:39.263 23:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:39:39.263 23:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:39.263 23:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:39:39.264 23:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:39.264 23:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:39.264 23:21:19 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:39.264 23:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:39.264 23:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:39.264 23:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.264 23:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:39:39.264 23:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:39.264 23:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:39.264 [ 00:39:39.264 { 00:39:39.264 "name": "BaseBdev1", 00:39:39.264 "aliases": [ 00:39:39.264 "2a6ca105-7655-4513-9595-b621c5738b47" 00:39:39.264 ], 00:39:39.264 "product_name": "Malloc disk", 00:39:39.264 "block_size": 512, 00:39:39.264 "num_blocks": 65536, 00:39:39.264 "uuid": "2a6ca105-7655-4513-9595-b621c5738b47", 00:39:39.264 "assigned_rate_limits": { 00:39:39.264 "rw_ios_per_sec": 0, 00:39:39.264 "rw_mbytes_per_sec": 0, 00:39:39.264 "r_mbytes_per_sec": 0, 00:39:39.264 "w_mbytes_per_sec": 0 00:39:39.264 }, 00:39:39.264 "claimed": true, 00:39:39.264 "claim_type": "exclusive_write", 00:39:39.264 "zoned": false, 00:39:39.264 "supported_io_types": { 00:39:39.264 "read": true, 00:39:39.264 "write": true, 00:39:39.264 "unmap": true, 00:39:39.264 "flush": true, 00:39:39.264 "reset": true, 00:39:39.264 "nvme_admin": false, 00:39:39.264 "nvme_io": false, 00:39:39.264 "nvme_io_md": false, 00:39:39.264 "write_zeroes": true, 00:39:39.264 "zcopy": true, 00:39:39.264 "get_zone_info": false, 00:39:39.264 "zone_management": false, 00:39:39.264 "zone_append": false, 00:39:39.264 "compare": false, 00:39:39.264 "compare_and_write": false, 00:39:39.264 "abort": true, 00:39:39.264 "seek_hole": false, 00:39:39.264 
"seek_data": false, 00:39:39.264 "copy": true, 00:39:39.264 "nvme_iov_md": false 00:39:39.264 }, 00:39:39.264 "memory_domains": [ 00:39:39.264 { 00:39:39.264 "dma_device_id": "system", 00:39:39.264 "dma_device_type": 1 00:39:39.264 }, 00:39:39.264 { 00:39:39.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:39.264 "dma_device_type": 2 00:39:39.264 } 00:39:39.264 ], 00:39:39.264 "driver_specific": {} 00:39:39.264 } 00:39:39.264 ] 00:39:39.264 23:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.264 23:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:39:39.264 23:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:39:39.264 23:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:39.264 23:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:39.264 23:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:39:39.264 23:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:39.264 23:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:39.264 23:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:39.264 23:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:39.264 23:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:39.264 23:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:39.264 23:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:39:39.264 23:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:39.264 23:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:39.264 23:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:39.264 23:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.264 23:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:39.264 "name": "Existed_Raid", 00:39:39.264 "uuid": "258fd248-6e9a-4445-8466-29e1b0757cfe", 00:39:39.264 "strip_size_kb": 64, 00:39:39.264 "state": "configuring", 00:39:39.264 "raid_level": "raid5f", 00:39:39.264 "superblock": true, 00:39:39.264 "num_base_bdevs": 3, 00:39:39.264 "num_base_bdevs_discovered": 1, 00:39:39.264 "num_base_bdevs_operational": 3, 00:39:39.264 "base_bdevs_list": [ 00:39:39.264 { 00:39:39.264 "name": "BaseBdev1", 00:39:39.264 "uuid": "2a6ca105-7655-4513-9595-b621c5738b47", 00:39:39.264 "is_configured": true, 00:39:39.264 "data_offset": 2048, 00:39:39.264 "data_size": 63488 00:39:39.264 }, 00:39:39.264 { 00:39:39.264 "name": "BaseBdev2", 00:39:39.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:39.264 "is_configured": false, 00:39:39.264 "data_offset": 0, 00:39:39.264 "data_size": 0 00:39:39.264 }, 00:39:39.264 { 00:39:39.264 "name": "BaseBdev3", 00:39:39.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:39.264 "is_configured": false, 00:39:39.264 "data_offset": 0, 00:39:39.264 "data_size": 0 00:39:39.264 } 00:39:39.264 ] 00:39:39.264 }' 00:39:39.264 23:21:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:39.264 23:21:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:39.523 23:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:39:39.523 23:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:39.523 23:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:39.523 [2024-12-09 23:21:20.152302] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:39:39.523 [2024-12-09 23:21:20.152362] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:39:39.523 23:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.523 23:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:39:39.523 23:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:39.523 23:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:39.782 [2024-12-09 23:21:20.164343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:39.782 [2024-12-09 23:21:20.166465] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:39:39.782 [2024-12-09 23:21:20.166509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:39:39.782 [2024-12-09 23:21:20.166522] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:39:39.782 [2024-12-09 23:21:20.166535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:39:39.782 23:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.782 23:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:39:39.782 23:21:20 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:39:39.782 23:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:39:39.782 23:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:39.782 23:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:39.782 23:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:39:39.782 23:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:39.782 23:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:39.782 23:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:39.782 23:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:39.782 23:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:39.782 23:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:39.782 23:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:39.782 23:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:39.782 23:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:39.782 23:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:39.782 23:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.782 23:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:39.782 "name": 
"Existed_Raid", 00:39:39.782 "uuid": "6acc6315-6c45-42b8-ba1d-5127fe90ac1f", 00:39:39.782 "strip_size_kb": 64, 00:39:39.782 "state": "configuring", 00:39:39.782 "raid_level": "raid5f", 00:39:39.782 "superblock": true, 00:39:39.782 "num_base_bdevs": 3, 00:39:39.782 "num_base_bdevs_discovered": 1, 00:39:39.782 "num_base_bdevs_operational": 3, 00:39:39.782 "base_bdevs_list": [ 00:39:39.782 { 00:39:39.782 "name": "BaseBdev1", 00:39:39.782 "uuid": "2a6ca105-7655-4513-9595-b621c5738b47", 00:39:39.782 "is_configured": true, 00:39:39.782 "data_offset": 2048, 00:39:39.782 "data_size": 63488 00:39:39.782 }, 00:39:39.782 { 00:39:39.782 "name": "BaseBdev2", 00:39:39.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:39.782 "is_configured": false, 00:39:39.782 "data_offset": 0, 00:39:39.782 "data_size": 0 00:39:39.782 }, 00:39:39.782 { 00:39:39.782 "name": "BaseBdev3", 00:39:39.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:39.782 "is_configured": false, 00:39:39.782 "data_offset": 0, 00:39:39.782 "data_size": 0 00:39:39.782 } 00:39:39.782 ] 00:39:39.782 }' 00:39:39.782 23:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:39.782 23:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:40.041 23:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:39:40.041 23:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.041 23:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:40.041 [2024-12-09 23:21:20.672171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:40.041 BaseBdev2 00:39:40.041 23:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.041 23:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 
-- # waitforbdev BaseBdev2 00:39:40.041 23:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:39:40.041 23:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:40.041 23:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:39:40.041 23:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:40.041 23:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:40.041 23:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:40.041 23:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.299 23:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:40.299 23:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.299 23:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:39:40.299 23:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.299 23:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:40.299 [ 00:39:40.299 { 00:39:40.299 "name": "BaseBdev2", 00:39:40.299 "aliases": [ 00:39:40.299 "7a75dd64-b982-40a9-85d6-2b36f884cc58" 00:39:40.299 ], 00:39:40.299 "product_name": "Malloc disk", 00:39:40.299 "block_size": 512, 00:39:40.299 "num_blocks": 65536, 00:39:40.299 "uuid": "7a75dd64-b982-40a9-85d6-2b36f884cc58", 00:39:40.299 "assigned_rate_limits": { 00:39:40.299 "rw_ios_per_sec": 0, 00:39:40.299 "rw_mbytes_per_sec": 0, 00:39:40.299 "r_mbytes_per_sec": 0, 00:39:40.299 "w_mbytes_per_sec": 0 00:39:40.299 }, 00:39:40.299 "claimed": true, 
00:39:40.299 "claim_type": "exclusive_write", 00:39:40.299 "zoned": false, 00:39:40.299 "supported_io_types": { 00:39:40.299 "read": true, 00:39:40.299 "write": true, 00:39:40.299 "unmap": true, 00:39:40.299 "flush": true, 00:39:40.299 "reset": true, 00:39:40.299 "nvme_admin": false, 00:39:40.299 "nvme_io": false, 00:39:40.299 "nvme_io_md": false, 00:39:40.299 "write_zeroes": true, 00:39:40.299 "zcopy": true, 00:39:40.299 "get_zone_info": false, 00:39:40.299 "zone_management": false, 00:39:40.299 "zone_append": false, 00:39:40.299 "compare": false, 00:39:40.299 "compare_and_write": false, 00:39:40.299 "abort": true, 00:39:40.299 "seek_hole": false, 00:39:40.299 "seek_data": false, 00:39:40.299 "copy": true, 00:39:40.299 "nvme_iov_md": false 00:39:40.299 }, 00:39:40.299 "memory_domains": [ 00:39:40.299 { 00:39:40.299 "dma_device_id": "system", 00:39:40.299 "dma_device_type": 1 00:39:40.299 }, 00:39:40.299 { 00:39:40.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:40.299 "dma_device_type": 2 00:39:40.300 } 00:39:40.300 ], 00:39:40.300 "driver_specific": {} 00:39:40.300 } 00:39:40.300 ] 00:39:40.300 23:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.300 23:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:39:40.300 23:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:39:40.300 23:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:39:40.300 23:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:39:40.300 23:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:40.300 23:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:40.300 23:21:20 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:39:40.300 23:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:40.300 23:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:40.300 23:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:40.300 23:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:40.300 23:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:40.300 23:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:40.300 23:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:40.300 23:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:40.300 23:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.300 23:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:40.300 23:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.300 23:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:40.300 "name": "Existed_Raid", 00:39:40.300 "uuid": "6acc6315-6c45-42b8-ba1d-5127fe90ac1f", 00:39:40.300 "strip_size_kb": 64, 00:39:40.300 "state": "configuring", 00:39:40.300 "raid_level": "raid5f", 00:39:40.300 "superblock": true, 00:39:40.300 "num_base_bdevs": 3, 00:39:40.300 "num_base_bdevs_discovered": 2, 00:39:40.300 "num_base_bdevs_operational": 3, 00:39:40.300 "base_bdevs_list": [ 00:39:40.300 { 00:39:40.300 "name": "BaseBdev1", 00:39:40.300 "uuid": "2a6ca105-7655-4513-9595-b621c5738b47", 
00:39:40.300 "is_configured": true, 00:39:40.300 "data_offset": 2048, 00:39:40.300 "data_size": 63488 00:39:40.300 }, 00:39:40.300 { 00:39:40.300 "name": "BaseBdev2", 00:39:40.300 "uuid": "7a75dd64-b982-40a9-85d6-2b36f884cc58", 00:39:40.300 "is_configured": true, 00:39:40.300 "data_offset": 2048, 00:39:40.300 "data_size": 63488 00:39:40.300 }, 00:39:40.300 { 00:39:40.300 "name": "BaseBdev3", 00:39:40.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:40.300 "is_configured": false, 00:39:40.300 "data_offset": 0, 00:39:40.300 "data_size": 0 00:39:40.300 } 00:39:40.300 ] 00:39:40.300 }' 00:39:40.300 23:21:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:40.300 23:21:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:40.558 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:39:40.558 23:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.558 23:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:40.877 [2024-12-09 23:21:21.199381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:39:40.877 [2024-12-09 23:21:21.199698] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:39:40.877 [2024-12-09 23:21:21.199723] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:39:40.877 [2024-12-09 23:21:21.200011] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:39:40.877 BaseBdev3 00:39:40.877 23:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.877 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:39:40.877 23:21:21 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:39:40.877 23:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:40.877 23:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:39:40.877 23:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:40.877 23:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:40.877 23:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:40.877 23:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.877 23:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:40.877 [2024-12-09 23:21:21.205719] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:39:40.877 [2024-12-09 23:21:21.205743] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:39:40.877 [2024-12-09 23:21:21.205906] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:40.877 23:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.877 23:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:39:40.877 23:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.877 23:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:40.877 [ 00:39:40.877 { 00:39:40.877 "name": "BaseBdev3", 00:39:40.877 "aliases": [ 00:39:40.877 "91faa306-ed17-4cd0-9277-2fcd76aebdb0" 00:39:40.877 ], 00:39:40.877 "product_name": "Malloc disk", 00:39:40.877 "block_size": 512, 00:39:40.877 
"num_blocks": 65536, 00:39:40.877 "uuid": "91faa306-ed17-4cd0-9277-2fcd76aebdb0", 00:39:40.877 "assigned_rate_limits": { 00:39:40.877 "rw_ios_per_sec": 0, 00:39:40.877 "rw_mbytes_per_sec": 0, 00:39:40.877 "r_mbytes_per_sec": 0, 00:39:40.877 "w_mbytes_per_sec": 0 00:39:40.877 }, 00:39:40.877 "claimed": true, 00:39:40.877 "claim_type": "exclusive_write", 00:39:40.877 "zoned": false, 00:39:40.877 "supported_io_types": { 00:39:40.877 "read": true, 00:39:40.877 "write": true, 00:39:40.877 "unmap": true, 00:39:40.877 "flush": true, 00:39:40.877 "reset": true, 00:39:40.877 "nvme_admin": false, 00:39:40.877 "nvme_io": false, 00:39:40.877 "nvme_io_md": false, 00:39:40.877 "write_zeroes": true, 00:39:40.877 "zcopy": true, 00:39:40.877 "get_zone_info": false, 00:39:40.877 "zone_management": false, 00:39:40.877 "zone_append": false, 00:39:40.877 "compare": false, 00:39:40.877 "compare_and_write": false, 00:39:40.877 "abort": true, 00:39:40.877 "seek_hole": false, 00:39:40.877 "seek_data": false, 00:39:40.877 "copy": true, 00:39:40.877 "nvme_iov_md": false 00:39:40.877 }, 00:39:40.877 "memory_domains": [ 00:39:40.877 { 00:39:40.877 "dma_device_id": "system", 00:39:40.877 "dma_device_type": 1 00:39:40.877 }, 00:39:40.877 { 00:39:40.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:40.877 "dma_device_type": 2 00:39:40.877 } 00:39:40.877 ], 00:39:40.877 "driver_specific": {} 00:39:40.877 } 00:39:40.877 ] 00:39:40.878 23:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.878 23:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:39:40.878 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:39:40.878 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:39:40.878 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 
3 00:39:40.878 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:40.878 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:40.878 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:39:40.878 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:40.878 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:40.878 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:40.878 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:40.878 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:40.878 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:40.878 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:40.878 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:40.878 23:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.878 23:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:40.878 23:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.878 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:40.878 "name": "Existed_Raid", 00:39:40.878 "uuid": "6acc6315-6c45-42b8-ba1d-5127fe90ac1f", 00:39:40.878 "strip_size_kb": 64, 00:39:40.878 "state": "online", 00:39:40.878 "raid_level": "raid5f", 00:39:40.878 "superblock": true, 
00:39:40.878 "num_base_bdevs": 3, 00:39:40.878 "num_base_bdevs_discovered": 3, 00:39:40.878 "num_base_bdevs_operational": 3, 00:39:40.878 "base_bdevs_list": [ 00:39:40.878 { 00:39:40.878 "name": "BaseBdev1", 00:39:40.878 "uuid": "2a6ca105-7655-4513-9595-b621c5738b47", 00:39:40.878 "is_configured": true, 00:39:40.878 "data_offset": 2048, 00:39:40.878 "data_size": 63488 00:39:40.878 }, 00:39:40.878 { 00:39:40.878 "name": "BaseBdev2", 00:39:40.878 "uuid": "7a75dd64-b982-40a9-85d6-2b36f884cc58", 00:39:40.878 "is_configured": true, 00:39:40.878 "data_offset": 2048, 00:39:40.878 "data_size": 63488 00:39:40.878 }, 00:39:40.878 { 00:39:40.878 "name": "BaseBdev3", 00:39:40.878 "uuid": "91faa306-ed17-4cd0-9277-2fcd76aebdb0", 00:39:40.878 "is_configured": true, 00:39:40.878 "data_offset": 2048, 00:39:40.878 "data_size": 63488 00:39:40.878 } 00:39:40.878 ] 00:39:40.878 }' 00:39:40.878 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:40.878 23:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:41.137 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:39:41.137 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:39:41.137 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:39:41.137 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:39:41.137 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:39:41.137 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:39:41.137 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:39:41.137 23:21:21 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.137 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:39:41.137 23:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:41.137 [2024-12-09 23:21:21.655845] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:41.137 23:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.137 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:41.137 "name": "Existed_Raid", 00:39:41.137 "aliases": [ 00:39:41.137 "6acc6315-6c45-42b8-ba1d-5127fe90ac1f" 00:39:41.137 ], 00:39:41.137 "product_name": "Raid Volume", 00:39:41.137 "block_size": 512, 00:39:41.137 "num_blocks": 126976, 00:39:41.137 "uuid": "6acc6315-6c45-42b8-ba1d-5127fe90ac1f", 00:39:41.137 "assigned_rate_limits": { 00:39:41.137 "rw_ios_per_sec": 0, 00:39:41.137 "rw_mbytes_per_sec": 0, 00:39:41.137 "r_mbytes_per_sec": 0, 00:39:41.137 "w_mbytes_per_sec": 0 00:39:41.137 }, 00:39:41.137 "claimed": false, 00:39:41.137 "zoned": false, 00:39:41.137 "supported_io_types": { 00:39:41.137 "read": true, 00:39:41.137 "write": true, 00:39:41.137 "unmap": false, 00:39:41.137 "flush": false, 00:39:41.137 "reset": true, 00:39:41.137 "nvme_admin": false, 00:39:41.137 "nvme_io": false, 00:39:41.137 "nvme_io_md": false, 00:39:41.137 "write_zeroes": true, 00:39:41.137 "zcopy": false, 00:39:41.137 "get_zone_info": false, 00:39:41.137 "zone_management": false, 00:39:41.137 "zone_append": false, 00:39:41.137 "compare": false, 00:39:41.137 "compare_and_write": false, 00:39:41.137 "abort": false, 00:39:41.137 "seek_hole": false, 00:39:41.137 "seek_data": false, 00:39:41.137 "copy": false, 00:39:41.137 "nvme_iov_md": false 00:39:41.137 }, 00:39:41.137 "driver_specific": { 00:39:41.137 "raid": { 00:39:41.137 "uuid": "6acc6315-6c45-42b8-ba1d-5127fe90ac1f", 00:39:41.137 
"strip_size_kb": 64, 00:39:41.137 "state": "online", 00:39:41.137 "raid_level": "raid5f", 00:39:41.137 "superblock": true, 00:39:41.137 "num_base_bdevs": 3, 00:39:41.137 "num_base_bdevs_discovered": 3, 00:39:41.137 "num_base_bdevs_operational": 3, 00:39:41.137 "base_bdevs_list": [ 00:39:41.137 { 00:39:41.137 "name": "BaseBdev1", 00:39:41.137 "uuid": "2a6ca105-7655-4513-9595-b621c5738b47", 00:39:41.137 "is_configured": true, 00:39:41.137 "data_offset": 2048, 00:39:41.137 "data_size": 63488 00:39:41.138 }, 00:39:41.138 { 00:39:41.138 "name": "BaseBdev2", 00:39:41.138 "uuid": "7a75dd64-b982-40a9-85d6-2b36f884cc58", 00:39:41.138 "is_configured": true, 00:39:41.138 "data_offset": 2048, 00:39:41.138 "data_size": 63488 00:39:41.138 }, 00:39:41.138 { 00:39:41.138 "name": "BaseBdev3", 00:39:41.138 "uuid": "91faa306-ed17-4cd0-9277-2fcd76aebdb0", 00:39:41.138 "is_configured": true, 00:39:41.138 "data_offset": 2048, 00:39:41.138 "data_size": 63488 00:39:41.138 } 00:39:41.138 ] 00:39:41.138 } 00:39:41.138 } 00:39:41.138 }' 00:39:41.138 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:39:41.138 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:39:41.138 BaseBdev2 00:39:41.138 BaseBdev3' 00:39:41.138 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:41.397 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:39:41.397 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:41.397 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:39:41.397 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:41.397 23:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.397 23:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:41.397 23:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.397 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:41.397 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:41.397 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:41.397 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:39:41.397 23:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.397 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:41.397 23:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:41.397 23:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.397 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:41.397 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:41.397 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:41.397 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:39:41.397 23:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.397 23:21:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:41.397 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:41.397 23:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.397 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:41.397 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:41.397 23:21:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:39:41.397 23:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.397 23:21:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:41.397 [2024-12-09 23:21:21.919645] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:39:41.397 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.397 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:39:41.397 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:39:41.397 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:39:41.397 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:39:41.397 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:39:41.397 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:39:41.397 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:41.397 
23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:41.397 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:39:41.397 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:41.397 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:41.397 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:41.397 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:41.397 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:41.397 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:41.662 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:41.663 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:41.663 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.663 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:41.663 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.663 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:41.663 "name": "Existed_Raid", 00:39:41.663 "uuid": "6acc6315-6c45-42b8-ba1d-5127fe90ac1f", 00:39:41.663 "strip_size_kb": 64, 00:39:41.663 "state": "online", 00:39:41.663 "raid_level": "raid5f", 00:39:41.663 "superblock": true, 00:39:41.663 "num_base_bdevs": 3, 00:39:41.663 "num_base_bdevs_discovered": 2, 00:39:41.663 "num_base_bdevs_operational": 2, 00:39:41.663 
"base_bdevs_list": [ 00:39:41.663 { 00:39:41.663 "name": null, 00:39:41.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:41.663 "is_configured": false, 00:39:41.663 "data_offset": 0, 00:39:41.663 "data_size": 63488 00:39:41.663 }, 00:39:41.663 { 00:39:41.663 "name": "BaseBdev2", 00:39:41.663 "uuid": "7a75dd64-b982-40a9-85d6-2b36f884cc58", 00:39:41.663 "is_configured": true, 00:39:41.663 "data_offset": 2048, 00:39:41.663 "data_size": 63488 00:39:41.663 }, 00:39:41.663 { 00:39:41.663 "name": "BaseBdev3", 00:39:41.663 "uuid": "91faa306-ed17-4cd0-9277-2fcd76aebdb0", 00:39:41.663 "is_configured": true, 00:39:41.663 "data_offset": 2048, 00:39:41.663 "data_size": 63488 00:39:41.663 } 00:39:41.663 ] 00:39:41.663 }' 00:39:41.663 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:41.663 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:41.922 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:39:41.922 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:39:41.922 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:41.922 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.922 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:39:41.922 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:41.922 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.922 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:39:41.922 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:39:41.922 23:21:22 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:39:41.922 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.922 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:41.922 [2024-12-09 23:21:22.496767] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:39:41.922 [2024-12-09 23:21:22.496984] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:42.181 [2024-12-09 23:21:22.602264] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:42.181 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.181 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:39:42.181 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:39:42.181 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:42.181 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.181 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:39:42.181 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:42.181 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.181 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:39:42.181 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:39:42.181 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:39:42.181 23:21:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.181 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:42.181 [2024-12-09 23:21:22.654218] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:39:42.181 [2024-12-09 23:21:22.654292] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:39:42.181 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.181 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:39:42.181 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:39:42.181 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:42.181 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.181 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:42.181 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:39:42.181 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.181 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:39:42.181 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:39:42.181 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:39:42.181 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:39:42.181 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:39:42.181 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 
-- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:39:42.181 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.181 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:42.440 BaseBdev2 00:39:42.440 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.440 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:39:42.440 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:39:42.440 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:42.440 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:39:42.440 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:42.440 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:42.440 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:42.440 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.440 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:42.440 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.440 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:39:42.440 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.440 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:42.440 [ 00:39:42.440 { 00:39:42.440 "name": "BaseBdev2", 
00:39:42.440 "aliases": [ 00:39:42.440 "ed6f9972-de57-4a16-ba9f-7cd1b4765b2e" 00:39:42.440 ], 00:39:42.440 "product_name": "Malloc disk", 00:39:42.440 "block_size": 512, 00:39:42.440 "num_blocks": 65536, 00:39:42.440 "uuid": "ed6f9972-de57-4a16-ba9f-7cd1b4765b2e", 00:39:42.440 "assigned_rate_limits": { 00:39:42.440 "rw_ios_per_sec": 0, 00:39:42.440 "rw_mbytes_per_sec": 0, 00:39:42.440 "r_mbytes_per_sec": 0, 00:39:42.440 "w_mbytes_per_sec": 0 00:39:42.440 }, 00:39:42.440 "claimed": false, 00:39:42.440 "zoned": false, 00:39:42.440 "supported_io_types": { 00:39:42.440 "read": true, 00:39:42.440 "write": true, 00:39:42.440 "unmap": true, 00:39:42.440 "flush": true, 00:39:42.440 "reset": true, 00:39:42.440 "nvme_admin": false, 00:39:42.440 "nvme_io": false, 00:39:42.440 "nvme_io_md": false, 00:39:42.440 "write_zeroes": true, 00:39:42.440 "zcopy": true, 00:39:42.440 "get_zone_info": false, 00:39:42.440 "zone_management": false, 00:39:42.440 "zone_append": false, 00:39:42.440 "compare": false, 00:39:42.440 "compare_and_write": false, 00:39:42.440 "abort": true, 00:39:42.440 "seek_hole": false, 00:39:42.440 "seek_data": false, 00:39:42.440 "copy": true, 00:39:42.440 "nvme_iov_md": false 00:39:42.440 }, 00:39:42.440 "memory_domains": [ 00:39:42.440 { 00:39:42.440 "dma_device_id": "system", 00:39:42.440 "dma_device_type": 1 00:39:42.440 }, 00:39:42.440 { 00:39:42.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:42.440 "dma_device_type": 2 00:39:42.440 } 00:39:42.440 ], 00:39:42.440 "driver_specific": {} 00:39:42.440 } 00:39:42.440 ] 00:39:42.440 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.440 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:39:42.440 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:39:42.440 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 
00:39:42.440 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:39:42.440 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.440 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:42.440 BaseBdev3 00:39:42.440 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.441 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:39:42.441 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:39:42.441 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:42.441 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:39:42.441 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:42.441 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:42.441 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:42.441 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.441 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:42.441 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.441 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:39:42.441 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.441 23:21:22 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:39:42.441 [ 00:39:42.441 { 00:39:42.441 "name": "BaseBdev3", 00:39:42.441 "aliases": [ 00:39:42.441 "a8fa33b4-398f-4e7a-8c48-ab7297c05070" 00:39:42.441 ], 00:39:42.441 "product_name": "Malloc disk", 00:39:42.441 "block_size": 512, 00:39:42.441 "num_blocks": 65536, 00:39:42.441 "uuid": "a8fa33b4-398f-4e7a-8c48-ab7297c05070", 00:39:42.441 "assigned_rate_limits": { 00:39:42.441 "rw_ios_per_sec": 0, 00:39:42.441 "rw_mbytes_per_sec": 0, 00:39:42.441 "r_mbytes_per_sec": 0, 00:39:42.441 "w_mbytes_per_sec": 0 00:39:42.441 }, 00:39:42.441 "claimed": false, 00:39:42.441 "zoned": false, 00:39:42.441 "supported_io_types": { 00:39:42.441 "read": true, 00:39:42.441 "write": true, 00:39:42.441 "unmap": true, 00:39:42.441 "flush": true, 00:39:42.441 "reset": true, 00:39:42.441 "nvme_admin": false, 00:39:42.441 "nvme_io": false, 00:39:42.441 "nvme_io_md": false, 00:39:42.441 "write_zeroes": true, 00:39:42.441 "zcopy": true, 00:39:42.441 "get_zone_info": false, 00:39:42.441 "zone_management": false, 00:39:42.441 "zone_append": false, 00:39:42.441 "compare": false, 00:39:42.441 "compare_and_write": false, 00:39:42.441 "abort": true, 00:39:42.441 "seek_hole": false, 00:39:42.441 "seek_data": false, 00:39:42.441 "copy": true, 00:39:42.441 "nvme_iov_md": false 00:39:42.441 }, 00:39:42.441 "memory_domains": [ 00:39:42.441 { 00:39:42.441 "dma_device_id": "system", 00:39:42.441 "dma_device_type": 1 00:39:42.441 }, 00:39:42.441 { 00:39:42.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:42.441 "dma_device_type": 2 00:39:42.441 } 00:39:42.441 ], 00:39:42.441 "driver_specific": {} 00:39:42.441 } 00:39:42.441 ] 00:39:42.441 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.441 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:39:42.441 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:39:42.441 
23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:39:42.441 23:21:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:39:42.441 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.441 23:21:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:42.441 [2024-12-09 23:21:23.004228] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:39:42.441 [2024-12-09 23:21:23.004294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:39:42.441 [2024-12-09 23:21:23.004324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:42.441 [2024-12-09 23:21:23.006777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:39:42.441 23:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.441 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:39:42.441 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:42.441 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:42.441 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:39:42.441 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:42.441 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:42.441 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:39:42.441 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:42.441 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:42.441 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:42.441 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:42.441 23:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.441 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:42.441 23:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:42.441 23:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.441 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:42.441 "name": "Existed_Raid", 00:39:42.441 "uuid": "8637c381-b529-4d4b-b77f-d13e143277f8", 00:39:42.441 "strip_size_kb": 64, 00:39:42.441 "state": "configuring", 00:39:42.441 "raid_level": "raid5f", 00:39:42.441 "superblock": true, 00:39:42.441 "num_base_bdevs": 3, 00:39:42.441 "num_base_bdevs_discovered": 2, 00:39:42.441 "num_base_bdevs_operational": 3, 00:39:42.441 "base_bdevs_list": [ 00:39:42.441 { 00:39:42.441 "name": "BaseBdev1", 00:39:42.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:42.441 "is_configured": false, 00:39:42.441 "data_offset": 0, 00:39:42.441 "data_size": 0 00:39:42.441 }, 00:39:42.441 { 00:39:42.441 "name": "BaseBdev2", 00:39:42.441 "uuid": "ed6f9972-de57-4a16-ba9f-7cd1b4765b2e", 00:39:42.441 "is_configured": true, 00:39:42.441 "data_offset": 2048, 00:39:42.441 "data_size": 63488 00:39:42.441 }, 00:39:42.441 { 00:39:42.441 "name": "BaseBdev3", 00:39:42.441 "uuid": 
"a8fa33b4-398f-4e7a-8c48-ab7297c05070", 00:39:42.441 "is_configured": true, 00:39:42.441 "data_offset": 2048, 00:39:42.441 "data_size": 63488 00:39:42.441 } 00:39:42.441 ] 00:39:42.441 }' 00:39:42.441 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:42.441 23:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:43.035 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:39:43.035 23:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.035 23:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:43.035 [2024-12-09 23:21:23.455646] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:39:43.035 23:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.035 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:39:43.035 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:43.035 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:43.035 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:39:43.035 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:43.035 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:43.035 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:43.035 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:43.035 23:21:23 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:43.035 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:43.035 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:43.035 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:43.035 23:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.035 23:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:43.035 23:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.035 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:43.035 "name": "Existed_Raid", 00:39:43.035 "uuid": "8637c381-b529-4d4b-b77f-d13e143277f8", 00:39:43.035 "strip_size_kb": 64, 00:39:43.035 "state": "configuring", 00:39:43.035 "raid_level": "raid5f", 00:39:43.035 "superblock": true, 00:39:43.035 "num_base_bdevs": 3, 00:39:43.035 "num_base_bdevs_discovered": 1, 00:39:43.035 "num_base_bdevs_operational": 3, 00:39:43.035 "base_bdevs_list": [ 00:39:43.035 { 00:39:43.035 "name": "BaseBdev1", 00:39:43.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:43.035 "is_configured": false, 00:39:43.035 "data_offset": 0, 00:39:43.035 "data_size": 0 00:39:43.035 }, 00:39:43.035 { 00:39:43.035 "name": null, 00:39:43.035 "uuid": "ed6f9972-de57-4a16-ba9f-7cd1b4765b2e", 00:39:43.035 "is_configured": false, 00:39:43.035 "data_offset": 0, 00:39:43.035 "data_size": 63488 00:39:43.035 }, 00:39:43.035 { 00:39:43.035 "name": "BaseBdev3", 00:39:43.035 "uuid": "a8fa33b4-398f-4e7a-8c48-ab7297c05070", 00:39:43.035 "is_configured": true, 00:39:43.035 "data_offset": 2048, 00:39:43.035 "data_size": 63488 00:39:43.035 } 00:39:43.035 ] 
00:39:43.035 }' 00:39:43.035 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:43.035 23:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:43.304 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:43.304 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:39:43.304 23:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.304 23:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:43.304 23:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.304 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:39:43.304 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:39:43.304 23:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.304 23:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:43.562 [2024-12-09 23:21:23.948133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:43.562 BaseBdev1 00:39:43.562 23:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.562 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:39:43.562 23:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:39:43.562 23:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:43.562 23:21:23 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:39:43.562 23:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:43.562 23:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:43.562 23:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:43.562 23:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.562 23:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:43.562 23:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.562 23:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:39:43.562 23:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.562 23:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:43.562 [ 00:39:43.562 { 00:39:43.562 "name": "BaseBdev1", 00:39:43.562 "aliases": [ 00:39:43.562 "dd778d66-a5b4-4f70-b94f-a9dd2558dd51" 00:39:43.562 ], 00:39:43.562 "product_name": "Malloc disk", 00:39:43.562 "block_size": 512, 00:39:43.562 "num_blocks": 65536, 00:39:43.562 "uuid": "dd778d66-a5b4-4f70-b94f-a9dd2558dd51", 00:39:43.562 "assigned_rate_limits": { 00:39:43.562 "rw_ios_per_sec": 0, 00:39:43.562 "rw_mbytes_per_sec": 0, 00:39:43.562 "r_mbytes_per_sec": 0, 00:39:43.562 "w_mbytes_per_sec": 0 00:39:43.562 }, 00:39:43.562 "claimed": true, 00:39:43.562 "claim_type": "exclusive_write", 00:39:43.562 "zoned": false, 00:39:43.562 "supported_io_types": { 00:39:43.562 "read": true, 00:39:43.562 "write": true, 00:39:43.562 "unmap": true, 00:39:43.562 "flush": true, 00:39:43.562 "reset": true, 00:39:43.562 "nvme_admin": false, 00:39:43.562 "nvme_io": false, 00:39:43.562 
"nvme_io_md": false, 00:39:43.562 "write_zeroes": true, 00:39:43.562 "zcopy": true, 00:39:43.562 "get_zone_info": false, 00:39:43.562 "zone_management": false, 00:39:43.562 "zone_append": false, 00:39:43.562 "compare": false, 00:39:43.562 "compare_and_write": false, 00:39:43.562 "abort": true, 00:39:43.562 "seek_hole": false, 00:39:43.562 "seek_data": false, 00:39:43.562 "copy": true, 00:39:43.562 "nvme_iov_md": false 00:39:43.562 }, 00:39:43.562 "memory_domains": [ 00:39:43.562 { 00:39:43.562 "dma_device_id": "system", 00:39:43.562 "dma_device_type": 1 00:39:43.562 }, 00:39:43.562 { 00:39:43.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:43.562 "dma_device_type": 2 00:39:43.562 } 00:39:43.562 ], 00:39:43.562 "driver_specific": {} 00:39:43.562 } 00:39:43.562 ] 00:39:43.562 23:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.562 23:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:39:43.562 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:39:43.562 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:43.562 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:43.562 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:39:43.562 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:43.562 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:43.562 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:43.562 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:43.562 
23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:43.562 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:43.562 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:43.562 23:21:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:43.562 23:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.563 23:21:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:43.563 23:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.563 23:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:43.563 "name": "Existed_Raid", 00:39:43.563 "uuid": "8637c381-b529-4d4b-b77f-d13e143277f8", 00:39:43.563 "strip_size_kb": 64, 00:39:43.563 "state": "configuring", 00:39:43.563 "raid_level": "raid5f", 00:39:43.563 "superblock": true, 00:39:43.563 "num_base_bdevs": 3, 00:39:43.563 "num_base_bdevs_discovered": 2, 00:39:43.563 "num_base_bdevs_operational": 3, 00:39:43.563 "base_bdevs_list": [ 00:39:43.563 { 00:39:43.563 "name": "BaseBdev1", 00:39:43.563 "uuid": "dd778d66-a5b4-4f70-b94f-a9dd2558dd51", 00:39:43.563 "is_configured": true, 00:39:43.563 "data_offset": 2048, 00:39:43.563 "data_size": 63488 00:39:43.563 }, 00:39:43.563 { 00:39:43.563 "name": null, 00:39:43.563 "uuid": "ed6f9972-de57-4a16-ba9f-7cd1b4765b2e", 00:39:43.563 "is_configured": false, 00:39:43.563 "data_offset": 0, 00:39:43.563 "data_size": 63488 00:39:43.563 }, 00:39:43.563 { 00:39:43.563 "name": "BaseBdev3", 00:39:43.563 "uuid": "a8fa33b4-398f-4e7a-8c48-ab7297c05070", 00:39:43.563 "is_configured": true, 00:39:43.563 "data_offset": 2048, 00:39:43.563 "data_size": 63488 00:39:43.563 } 
00:39:43.563 ] 00:39:43.563 }' 00:39:43.563 23:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:43.563 23:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:43.822 23:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:39:43.822 23:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:43.822 23:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.822 23:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:43.822 23:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.822 23:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:39:43.822 23:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:39:43.822 23:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.822 23:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:43.822 [2024-12-09 23:21:24.439587] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:39:43.822 23:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.822 23:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:39:43.822 23:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:43.822 23:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:43.822 23:21:24 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:39:43.822 23:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:43.822 23:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:43.822 23:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:43.822 23:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:43.822 23:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:43.822 23:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:43.822 23:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:43.822 23:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:43.822 23:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.822 23:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:44.081 23:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:44.081 23:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:44.081 "name": "Existed_Raid", 00:39:44.081 "uuid": "8637c381-b529-4d4b-b77f-d13e143277f8", 00:39:44.081 "strip_size_kb": 64, 00:39:44.081 "state": "configuring", 00:39:44.081 "raid_level": "raid5f", 00:39:44.081 "superblock": true, 00:39:44.081 "num_base_bdevs": 3, 00:39:44.081 "num_base_bdevs_discovered": 1, 00:39:44.081 "num_base_bdevs_operational": 3, 00:39:44.081 "base_bdevs_list": [ 00:39:44.081 { 00:39:44.081 "name": "BaseBdev1", 00:39:44.081 "uuid": "dd778d66-a5b4-4f70-b94f-a9dd2558dd51", 00:39:44.081 "is_configured": true, 
00:39:44.081 "data_offset": 2048, 00:39:44.081 "data_size": 63488 00:39:44.081 }, 00:39:44.081 { 00:39:44.081 "name": null, 00:39:44.081 "uuid": "ed6f9972-de57-4a16-ba9f-7cd1b4765b2e", 00:39:44.081 "is_configured": false, 00:39:44.081 "data_offset": 0, 00:39:44.081 "data_size": 63488 00:39:44.081 }, 00:39:44.081 { 00:39:44.081 "name": null, 00:39:44.081 "uuid": "a8fa33b4-398f-4e7a-8c48-ab7297c05070", 00:39:44.081 "is_configured": false, 00:39:44.081 "data_offset": 0, 00:39:44.081 "data_size": 63488 00:39:44.081 } 00:39:44.081 ] 00:39:44.081 }' 00:39:44.081 23:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:44.081 23:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:44.340 23:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:44.340 23:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:39:44.340 23:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:44.340 23:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:44.340 23:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:44.340 23:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:39:44.340 23:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:39:44.340 23:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:44.340 23:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:44.340 [2024-12-09 23:21:24.915609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:39:44.340 23:21:24 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:44.340 23:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:39:44.340 23:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:44.340 23:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:44.340 23:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:39:44.340 23:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:44.341 23:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:44.341 23:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:44.341 23:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:44.341 23:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:44.341 23:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:44.341 23:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:44.341 23:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:44.341 23:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:44.341 23:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:44.341 23:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:44.341 23:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:39:44.341 "name": "Existed_Raid", 00:39:44.341 "uuid": "8637c381-b529-4d4b-b77f-d13e143277f8", 00:39:44.341 "strip_size_kb": 64, 00:39:44.341 "state": "configuring", 00:39:44.341 "raid_level": "raid5f", 00:39:44.341 "superblock": true, 00:39:44.341 "num_base_bdevs": 3, 00:39:44.341 "num_base_bdevs_discovered": 2, 00:39:44.341 "num_base_bdevs_operational": 3, 00:39:44.341 "base_bdevs_list": [ 00:39:44.341 { 00:39:44.341 "name": "BaseBdev1", 00:39:44.341 "uuid": "dd778d66-a5b4-4f70-b94f-a9dd2558dd51", 00:39:44.341 "is_configured": true, 00:39:44.341 "data_offset": 2048, 00:39:44.341 "data_size": 63488 00:39:44.341 }, 00:39:44.341 { 00:39:44.341 "name": null, 00:39:44.341 "uuid": "ed6f9972-de57-4a16-ba9f-7cd1b4765b2e", 00:39:44.341 "is_configured": false, 00:39:44.341 "data_offset": 0, 00:39:44.341 "data_size": 63488 00:39:44.341 }, 00:39:44.341 { 00:39:44.341 "name": "BaseBdev3", 00:39:44.341 "uuid": "a8fa33b4-398f-4e7a-8c48-ab7297c05070", 00:39:44.341 "is_configured": true, 00:39:44.341 "data_offset": 2048, 00:39:44.341 "data_size": 63488 00:39:44.341 } 00:39:44.341 ] 00:39:44.341 }' 00:39:44.341 23:21:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:44.341 23:21:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:44.922 23:21:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:44.923 23:21:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:44.923 23:21:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:44.923 23:21:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:39:44.923 23:21:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:44.923 23:21:25 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:39:44.923 23:21:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:39:44.923 23:21:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:44.923 23:21:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:44.923 [2024-12-09 23:21:25.403637] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:39:44.923 23:21:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:44.923 23:21:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:39:44.923 23:21:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:44.923 23:21:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:44.923 23:21:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:39:44.923 23:21:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:44.923 23:21:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:44.923 23:21:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:44.923 23:21:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:44.923 23:21:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:44.923 23:21:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:44.923 23:21:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:44.923 23:21:25 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:44.923 23:21:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:44.923 23:21:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:44.923 23:21:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:44.923 23:21:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:44.923 "name": "Existed_Raid", 00:39:44.923 "uuid": "8637c381-b529-4d4b-b77f-d13e143277f8", 00:39:44.923 "strip_size_kb": 64, 00:39:44.923 "state": "configuring", 00:39:44.923 "raid_level": "raid5f", 00:39:44.923 "superblock": true, 00:39:44.923 "num_base_bdevs": 3, 00:39:44.923 "num_base_bdevs_discovered": 1, 00:39:44.923 "num_base_bdevs_operational": 3, 00:39:44.923 "base_bdevs_list": [ 00:39:44.923 { 00:39:44.923 "name": null, 00:39:44.923 "uuid": "dd778d66-a5b4-4f70-b94f-a9dd2558dd51", 00:39:44.923 "is_configured": false, 00:39:44.923 "data_offset": 0, 00:39:44.923 "data_size": 63488 00:39:44.923 }, 00:39:44.923 { 00:39:44.923 "name": null, 00:39:44.923 "uuid": "ed6f9972-de57-4a16-ba9f-7cd1b4765b2e", 00:39:44.923 "is_configured": false, 00:39:44.923 "data_offset": 0, 00:39:44.923 "data_size": 63488 00:39:44.923 }, 00:39:44.923 { 00:39:44.923 "name": "BaseBdev3", 00:39:44.923 "uuid": "a8fa33b4-398f-4e7a-8c48-ab7297c05070", 00:39:44.923 "is_configured": true, 00:39:44.923 "data_offset": 2048, 00:39:44.923 "data_size": 63488 00:39:44.923 } 00:39:44.923 ] 00:39:44.923 }' 00:39:45.182 23:21:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:45.182 23:21:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:45.441 23:21:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 
00:39:45.441 23:21:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:45.441 23:21:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:45.441 23:21:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:39:45.441 23:21:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:45.441 23:21:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:39:45.441 23:21:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:39:45.441 23:21:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:45.441 23:21:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:45.442 [2024-12-09 23:21:26.012577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:45.442 23:21:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:45.442 23:21:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:39:45.442 23:21:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:45.442 23:21:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:45.442 23:21:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:39:45.442 23:21:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:45.442 23:21:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:45.442 23:21:26 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:45.442 23:21:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:45.442 23:21:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:45.442 23:21:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:45.442 23:21:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:45.442 23:21:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:45.442 23:21:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:45.442 23:21:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:45.442 23:21:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:45.442 23:21:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:45.442 "name": "Existed_Raid", 00:39:45.442 "uuid": "8637c381-b529-4d4b-b77f-d13e143277f8", 00:39:45.442 "strip_size_kb": 64, 00:39:45.442 "state": "configuring", 00:39:45.442 "raid_level": "raid5f", 00:39:45.442 "superblock": true, 00:39:45.442 "num_base_bdevs": 3, 00:39:45.442 "num_base_bdevs_discovered": 2, 00:39:45.442 "num_base_bdevs_operational": 3, 00:39:45.442 "base_bdevs_list": [ 00:39:45.442 { 00:39:45.442 "name": null, 00:39:45.442 "uuid": "dd778d66-a5b4-4f70-b94f-a9dd2558dd51", 00:39:45.442 "is_configured": false, 00:39:45.442 "data_offset": 0, 00:39:45.442 "data_size": 63488 00:39:45.442 }, 00:39:45.442 { 00:39:45.442 "name": "BaseBdev2", 00:39:45.442 "uuid": "ed6f9972-de57-4a16-ba9f-7cd1b4765b2e", 00:39:45.442 "is_configured": true, 00:39:45.442 "data_offset": 2048, 00:39:45.442 "data_size": 63488 00:39:45.442 }, 00:39:45.442 { 
00:39:45.442 "name": "BaseBdev3", 00:39:45.442 "uuid": "a8fa33b4-398f-4e7a-8c48-ab7297c05070", 00:39:45.442 "is_configured": true, 00:39:45.442 "data_offset": 2048, 00:39:45.442 "data_size": 63488 00:39:45.442 } 00:39:45.442 ] 00:39:45.442 }' 00:39:45.442 23:21:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:45.442 23:21:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:46.009 23:21:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:46.009 23:21:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u dd778d66-a5b4-4f70-b94f-a9dd2558dd51 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:46.010 [2024-12-09 23:21:26.560877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:39:46.010 [2024-12-09 23:21:26.561182] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:39:46.010 [2024-12-09 23:21:26.561205] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:39:46.010 [2024-12-09 23:21:26.561527] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:39:46.010 NewBaseBdev 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:46.010 [2024-12-09 23:21:26.566990] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:39:46.010 
[2024-12-09 23:21:26.567239] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:39:46.010 [2024-12-09 23:21:26.567615] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:46.010 [ 00:39:46.010 { 00:39:46.010 "name": "NewBaseBdev", 00:39:46.010 "aliases": [ 00:39:46.010 "dd778d66-a5b4-4f70-b94f-a9dd2558dd51" 00:39:46.010 ], 00:39:46.010 "product_name": "Malloc disk", 00:39:46.010 "block_size": 512, 00:39:46.010 "num_blocks": 65536, 00:39:46.010 "uuid": "dd778d66-a5b4-4f70-b94f-a9dd2558dd51", 00:39:46.010 "assigned_rate_limits": { 00:39:46.010 "rw_ios_per_sec": 0, 00:39:46.010 "rw_mbytes_per_sec": 0, 00:39:46.010 "r_mbytes_per_sec": 0, 00:39:46.010 "w_mbytes_per_sec": 0 00:39:46.010 }, 00:39:46.010 "claimed": true, 00:39:46.010 "claim_type": "exclusive_write", 00:39:46.010 "zoned": false, 00:39:46.010 "supported_io_types": { 00:39:46.010 "read": true, 00:39:46.010 "write": true, 00:39:46.010 "unmap": true, 00:39:46.010 "flush": true, 00:39:46.010 "reset": true, 00:39:46.010 "nvme_admin": false, 00:39:46.010 "nvme_io": false, 00:39:46.010 "nvme_io_md": false, 00:39:46.010 "write_zeroes": true, 00:39:46.010 "zcopy": true, 00:39:46.010 "get_zone_info": false, 00:39:46.010 "zone_management": false, 00:39:46.010 "zone_append": false, 00:39:46.010 "compare": false, 00:39:46.010 "compare_and_write": false, 00:39:46.010 "abort": true, 00:39:46.010 "seek_hole": false, 00:39:46.010 "seek_data": false, 
00:39:46.010 "copy": true, 00:39:46.010 "nvme_iov_md": false 00:39:46.010 }, 00:39:46.010 "memory_domains": [ 00:39:46.010 { 00:39:46.010 "dma_device_id": "system", 00:39:46.010 "dma_device_type": 1 00:39:46.010 }, 00:39:46.010 { 00:39:46.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:46.010 "dma_device_type": 2 00:39:46.010 } 00:39:46.010 ], 00:39:46.010 "driver_specific": {} 00:39:46.010 } 00:39:46.010 ] 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:46.010 23:21:26 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:46.010 23:21:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:46.270 23:21:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:46.270 "name": "Existed_Raid", 00:39:46.270 "uuid": "8637c381-b529-4d4b-b77f-d13e143277f8", 00:39:46.270 "strip_size_kb": 64, 00:39:46.270 "state": "online", 00:39:46.270 "raid_level": "raid5f", 00:39:46.270 "superblock": true, 00:39:46.270 "num_base_bdevs": 3, 00:39:46.270 "num_base_bdevs_discovered": 3, 00:39:46.270 "num_base_bdevs_operational": 3, 00:39:46.270 "base_bdevs_list": [ 00:39:46.270 { 00:39:46.270 "name": "NewBaseBdev", 00:39:46.270 "uuid": "dd778d66-a5b4-4f70-b94f-a9dd2558dd51", 00:39:46.270 "is_configured": true, 00:39:46.270 "data_offset": 2048, 00:39:46.270 "data_size": 63488 00:39:46.270 }, 00:39:46.270 { 00:39:46.270 "name": "BaseBdev2", 00:39:46.270 "uuid": "ed6f9972-de57-4a16-ba9f-7cd1b4765b2e", 00:39:46.270 "is_configured": true, 00:39:46.270 "data_offset": 2048, 00:39:46.270 "data_size": 63488 00:39:46.270 }, 00:39:46.270 { 00:39:46.270 "name": "BaseBdev3", 00:39:46.270 "uuid": "a8fa33b4-398f-4e7a-8c48-ab7297c05070", 00:39:46.270 "is_configured": true, 00:39:46.270 "data_offset": 2048, 00:39:46.270 "data_size": 63488 00:39:46.270 } 00:39:46.270 ] 00:39:46.270 }' 00:39:46.270 23:21:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:46.270 23:21:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:46.528 23:21:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:39:46.528 23:21:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:39:46.529 23:21:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:39:46.529 23:21:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:39:46.529 23:21:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:39:46.529 23:21:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:39:46.529 23:21:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:39:46.529 23:21:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:39:46.529 23:21:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:46.529 23:21:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:46.529 [2024-12-09 23:21:27.030840] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:46.529 23:21:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:46.529 23:21:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:46.529 "name": "Existed_Raid", 00:39:46.529 "aliases": [ 00:39:46.529 "8637c381-b529-4d4b-b77f-d13e143277f8" 00:39:46.529 ], 00:39:46.529 "product_name": "Raid Volume", 00:39:46.529 "block_size": 512, 00:39:46.529 "num_blocks": 126976, 00:39:46.529 "uuid": "8637c381-b529-4d4b-b77f-d13e143277f8", 00:39:46.529 "assigned_rate_limits": { 00:39:46.529 "rw_ios_per_sec": 0, 00:39:46.529 "rw_mbytes_per_sec": 0, 00:39:46.529 "r_mbytes_per_sec": 0, 00:39:46.529 "w_mbytes_per_sec": 0 00:39:46.529 }, 00:39:46.529 "claimed": false, 00:39:46.529 "zoned": false, 00:39:46.529 
"supported_io_types": { 00:39:46.529 "read": true, 00:39:46.529 "write": true, 00:39:46.529 "unmap": false, 00:39:46.529 "flush": false, 00:39:46.529 "reset": true, 00:39:46.529 "nvme_admin": false, 00:39:46.529 "nvme_io": false, 00:39:46.529 "nvme_io_md": false, 00:39:46.529 "write_zeroes": true, 00:39:46.529 "zcopy": false, 00:39:46.529 "get_zone_info": false, 00:39:46.529 "zone_management": false, 00:39:46.529 "zone_append": false, 00:39:46.529 "compare": false, 00:39:46.529 "compare_and_write": false, 00:39:46.529 "abort": false, 00:39:46.529 "seek_hole": false, 00:39:46.529 "seek_data": false, 00:39:46.529 "copy": false, 00:39:46.529 "nvme_iov_md": false 00:39:46.529 }, 00:39:46.529 "driver_specific": { 00:39:46.529 "raid": { 00:39:46.529 "uuid": "8637c381-b529-4d4b-b77f-d13e143277f8", 00:39:46.529 "strip_size_kb": 64, 00:39:46.529 "state": "online", 00:39:46.529 "raid_level": "raid5f", 00:39:46.529 "superblock": true, 00:39:46.529 "num_base_bdevs": 3, 00:39:46.529 "num_base_bdevs_discovered": 3, 00:39:46.529 "num_base_bdevs_operational": 3, 00:39:46.529 "base_bdevs_list": [ 00:39:46.529 { 00:39:46.529 "name": "NewBaseBdev", 00:39:46.529 "uuid": "dd778d66-a5b4-4f70-b94f-a9dd2558dd51", 00:39:46.529 "is_configured": true, 00:39:46.529 "data_offset": 2048, 00:39:46.529 "data_size": 63488 00:39:46.529 }, 00:39:46.529 { 00:39:46.529 "name": "BaseBdev2", 00:39:46.529 "uuid": "ed6f9972-de57-4a16-ba9f-7cd1b4765b2e", 00:39:46.529 "is_configured": true, 00:39:46.529 "data_offset": 2048, 00:39:46.529 "data_size": 63488 00:39:46.529 }, 00:39:46.529 { 00:39:46.529 "name": "BaseBdev3", 00:39:46.529 "uuid": "a8fa33b4-398f-4e7a-8c48-ab7297c05070", 00:39:46.529 "is_configured": true, 00:39:46.529 "data_offset": 2048, 00:39:46.529 "data_size": 63488 00:39:46.529 } 00:39:46.529 ] 00:39:46.529 } 00:39:46.529 } 00:39:46.529 }' 00:39:46.529 23:21:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:39:46.529 23:21:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:39:46.529 BaseBdev2 00:39:46.529 BaseBdev3' 00:39:46.529 23:21:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:46.529 23:21:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:39:46.529 23:21:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:46.529 23:21:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:39:46.529 23:21:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:46.529 23:21:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:46.529 23:21:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:46.788 23:21:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:46.788 23:21:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:46.788 23:21:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:46.788 23:21:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:46.788 23:21:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:39:46.788 23:21:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:46.788 23:21:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:46.788 23:21:27 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:46.788 23:21:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:46.788 23:21:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:46.788 23:21:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:46.788 23:21:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:46.788 23:21:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:39:46.788 23:21:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:46.788 23:21:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:46.788 23:21:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:46.788 23:21:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:46.788 23:21:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:46.788 23:21:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:46.788 23:21:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:39:46.788 23:21:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:46.788 23:21:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:46.788 [2024-12-09 23:21:27.290603] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:39:46.788 [2024-12-09 23:21:27.290654] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:39:46.788 [2024-12-09 23:21:27.290771] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:46.788 [2024-12-09 23:21:27.291113] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:46.788 [2024-12-09 23:21:27.291133] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:39:46.788 23:21:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:46.788 23:21:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80395 00:39:46.788 23:21:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80395 ']' 00:39:46.788 23:21:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80395 00:39:46.788 23:21:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:39:46.788 23:21:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:46.788 23:21:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80395 00:39:46.788 23:21:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:46.788 killing process with pid 80395 00:39:46.788 23:21:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:46.788 23:21:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80395' 00:39:46.788 23:21:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80395 00:39:46.788 [2024-12-09 23:21:27.328685] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:46.788 23:21:27 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@978 -- # wait 80395 00:39:47.047 [2024-12-09 23:21:27.663558] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:48.426 23:21:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:39:48.426 00:39:48.426 real 0m10.686s 00:39:48.426 user 0m16.725s 00:39:48.426 sys 0m2.171s 00:39:48.426 23:21:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:48.426 23:21:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:48.426 ************************************ 00:39:48.426 END TEST raid5f_state_function_test_sb 00:39:48.426 ************************************ 00:39:48.426 23:21:28 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:39:48.426 23:21:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:48.426 23:21:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:48.426 23:21:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:39:48.426 ************************************ 00:39:48.426 START TEST raid5f_superblock_test 00:39:48.426 ************************************ 00:39:48.426 23:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:39:48.426 23:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:39:48.426 23:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:39:48.426 23:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:39:48.426 23:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:39:48.426 23:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:39:48.426 23:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 
00:39:48.426 23:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:39:48.426 23:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:39:48.426 23:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:39:48.426 23:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:39:48.426 23:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:39:48.426 23:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:39:48.426 23:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:39:48.426 23:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:39:48.426 23:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:39:48.426 23:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:39:48.426 23:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81017 00:39:48.426 23:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:39:48.426 23:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81017 00:39:48.426 23:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81017 ']' 00:39:48.426 23:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:48.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:48.426 23:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:48.426 23:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:48.426 23:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:48.426 23:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:48.685 [2024-12-09 23:21:29.114263] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:39:48.685 [2024-12-09 23:21:29.114457] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81017 ] 00:39:48.685 [2024-12-09 23:21:29.301411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:48.944 [2024-12-09 23:21:29.450290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:49.204 [2024-12-09 23:21:29.701798] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:49.204 [2024-12-09 23:21:29.702196] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:49.463 23:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:49.463 23:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:39:49.463 23:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:39:49.463 23:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:39:49.463 23:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:39:49.463 23:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:39:49.463 23:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:39:49.463 23:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:39:49.463 23:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:39:49.463 23:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:39:49.463 23:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:39:49.463 23:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:49.463 23:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:49.463 malloc1 00:39:49.463 23:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:49.463 23:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:39:49.463 23:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:49.463 23:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:49.463 [2024-12-09 23:21:30.004819] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:39:49.463 [2024-12-09 23:21:30.005131] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:49.463 [2024-12-09 23:21:30.005200] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:39:49.463 [2024-12-09 23:21:30.005301] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:49.463 [2024-12-09 23:21:30.008280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:49.463 [2024-12-09 23:21:30.008461] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:39:49.463 pt1 00:39:49.463 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:49.463 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:39:49.463 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:39:49.463 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:39:49.463 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:39:49.463 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:39:49.463 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:39:49.463 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:39:49.463 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:39:49.463 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:39:49.463 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:49.463 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:49.463 malloc2 00:39:49.463 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:49.463 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:39:49.463 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:49.463 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:49.463 [2024-12-09 
23:21:30.068560] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:39:49.463 [2024-12-09 23:21:30.068834] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:49.463 [2024-12-09 23:21:30.068900] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:39:49.463 [2024-12-09 23:21:30.068971] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:49.463 [2024-12-09 23:21:30.071767] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:49.463 [2024-12-09 23:21:30.071926] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:39:49.463 pt2 00:39:49.463 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:49.463 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:39:49.463 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:39:49.463 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:39:49.463 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:39:49.463 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:39:49.463 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:39:49.463 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:39:49.463 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:39:49.463 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:39:49.463 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:49.463 23:21:30 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:49.722 malloc3 00:39:49.722 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:49.722 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:39:49.722 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:49.722 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:49.722 [2024-12-09 23:21:30.143130] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:39:49.722 [2024-12-09 23:21:30.143420] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:49.722 [2024-12-09 23:21:30.143492] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:39:49.722 [2024-12-09 23:21:30.143575] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:49.722 [2024-12-09 23:21:30.146464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:49.722 [2024-12-09 23:21:30.146608] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:39:49.722 pt3 00:39:49.722 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:49.722 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:39:49.722 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:39:49.722 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:39:49.722 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:49.722 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:39:49.722 [2024-12-09 23:21:30.155302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:39:49.722 [2024-12-09 23:21:30.157713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:49.722 [2024-12-09 23:21:30.157919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:39:49.722 [2024-12-09 23:21:30.158112] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:39:49.722 [2024-12-09 23:21:30.158136] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:39:49.722 [2024-12-09 23:21:30.158436] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:39:49.722 [2024-12-09 23:21:30.164446] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:39:49.722 [2024-12-09 23:21:30.164562] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:39:49.722 [2024-12-09 23:21:30.164863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:49.722 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:49.722 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:39:49.722 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:49.722 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:49.722 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:39:49.722 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:49.722 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:49.722 23:21:30 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:49.722 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:49.722 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:49.722 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:49.722 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:49.722 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:49.722 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:49.722 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:49.722 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:49.722 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:49.722 "name": "raid_bdev1", 00:39:49.722 "uuid": "a33ca39a-46c1-4b14-b87c-a8038fb04441", 00:39:49.722 "strip_size_kb": 64, 00:39:49.722 "state": "online", 00:39:49.722 "raid_level": "raid5f", 00:39:49.722 "superblock": true, 00:39:49.722 "num_base_bdevs": 3, 00:39:49.722 "num_base_bdevs_discovered": 3, 00:39:49.722 "num_base_bdevs_operational": 3, 00:39:49.722 "base_bdevs_list": [ 00:39:49.722 { 00:39:49.722 "name": "pt1", 00:39:49.722 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:49.722 "is_configured": true, 00:39:49.722 "data_offset": 2048, 00:39:49.722 "data_size": 63488 00:39:49.722 }, 00:39:49.722 { 00:39:49.722 "name": "pt2", 00:39:49.722 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:49.722 "is_configured": true, 00:39:49.722 "data_offset": 2048, 00:39:49.722 "data_size": 63488 00:39:49.722 }, 00:39:49.722 { 00:39:49.722 "name": "pt3", 00:39:49.722 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:39:49.722 "is_configured": true, 00:39:49.722 "data_offset": 2048, 00:39:49.722 "data_size": 63488 00:39:49.722 } 00:39:49.722 ] 00:39:49.722 }' 00:39:49.722 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:49.722 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:49.981 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:39:49.981 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:39:49.981 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:39:49.981 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:39:49.981 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:39:49.981 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:39:49.981 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:39:49.981 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:39:49.981 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:49.981 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:49.981 [2024-12-09 23:21:30.599917] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:50.240 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:50.240 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:50.240 "name": "raid_bdev1", 00:39:50.240 "aliases": [ 00:39:50.240 "a33ca39a-46c1-4b14-b87c-a8038fb04441" 00:39:50.240 ], 00:39:50.240 "product_name": "Raid Volume", 00:39:50.240 
"block_size": 512, 00:39:50.240 "num_blocks": 126976, 00:39:50.240 "uuid": "a33ca39a-46c1-4b14-b87c-a8038fb04441", 00:39:50.240 "assigned_rate_limits": { 00:39:50.240 "rw_ios_per_sec": 0, 00:39:50.240 "rw_mbytes_per_sec": 0, 00:39:50.240 "r_mbytes_per_sec": 0, 00:39:50.240 "w_mbytes_per_sec": 0 00:39:50.240 }, 00:39:50.240 "claimed": false, 00:39:50.240 "zoned": false, 00:39:50.240 "supported_io_types": { 00:39:50.240 "read": true, 00:39:50.240 "write": true, 00:39:50.240 "unmap": false, 00:39:50.240 "flush": false, 00:39:50.240 "reset": true, 00:39:50.240 "nvme_admin": false, 00:39:50.240 "nvme_io": false, 00:39:50.240 "nvme_io_md": false, 00:39:50.240 "write_zeroes": true, 00:39:50.240 "zcopy": false, 00:39:50.240 "get_zone_info": false, 00:39:50.240 "zone_management": false, 00:39:50.240 "zone_append": false, 00:39:50.240 "compare": false, 00:39:50.240 "compare_and_write": false, 00:39:50.240 "abort": false, 00:39:50.240 "seek_hole": false, 00:39:50.240 "seek_data": false, 00:39:50.240 "copy": false, 00:39:50.240 "nvme_iov_md": false 00:39:50.240 }, 00:39:50.240 "driver_specific": { 00:39:50.240 "raid": { 00:39:50.240 "uuid": "a33ca39a-46c1-4b14-b87c-a8038fb04441", 00:39:50.240 "strip_size_kb": 64, 00:39:50.240 "state": "online", 00:39:50.240 "raid_level": "raid5f", 00:39:50.240 "superblock": true, 00:39:50.240 "num_base_bdevs": 3, 00:39:50.240 "num_base_bdevs_discovered": 3, 00:39:50.240 "num_base_bdevs_operational": 3, 00:39:50.240 "base_bdevs_list": [ 00:39:50.240 { 00:39:50.240 "name": "pt1", 00:39:50.240 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:50.240 "is_configured": true, 00:39:50.240 "data_offset": 2048, 00:39:50.240 "data_size": 63488 00:39:50.240 }, 00:39:50.240 { 00:39:50.240 "name": "pt2", 00:39:50.240 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:50.240 "is_configured": true, 00:39:50.240 "data_offset": 2048, 00:39:50.240 "data_size": 63488 00:39:50.240 }, 00:39:50.240 { 00:39:50.240 "name": "pt3", 00:39:50.240 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:39:50.240 "is_configured": true, 00:39:50.240 "data_offset": 2048, 00:39:50.240 "data_size": 63488 00:39:50.240 } 00:39:50.240 ] 00:39:50.240 } 00:39:50.240 } 00:39:50.240 }' 00:39:50.240 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:39:50.240 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:39:50.240 pt2 00:39:50.240 pt3' 00:39:50.240 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:50.240 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:39:50.240 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:50.240 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:39:50.240 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:50.240 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:50.240 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:50.240 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:50.240 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:50.240 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:50.240 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:50.240 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:39:50.240 23:21:30 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:39:50.240 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:50.240 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:50.240 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:50.240 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:50.240 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:50.240 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:50.240 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:39:50.240 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:50.241 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:50.241 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:50.241 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:50.241 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:50.241 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:50.241 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:39:50.241 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:50.241 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:50.241 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:39:50.241 
[2024-12-09 23:21:30.867743] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:50.500 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:50.500 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a33ca39a-46c1-4b14-b87c-a8038fb04441 00:39:50.500 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a33ca39a-46c1-4b14-b87c-a8038fb04441 ']' 00:39:50.500 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:39:50.500 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:50.500 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:50.500 [2024-12-09 23:21:30.911553] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:50.500 [2024-12-09 23:21:30.911596] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:50.500 [2024-12-09 23:21:30.911700] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:50.500 [2024-12-09 23:21:30.911791] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:50.500 [2024-12-09 23:21:30.911805] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:39:50.500 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:50.500 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:39:50.500 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:50.500 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:50.500 23:21:30 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:39:50.500 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:50.500 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:39:50.500 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:39:50.500 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:39:50.500 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:39:50.500 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:50.500 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:50.500 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:50.500 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:39:50.500 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:39:50.500 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:50.500 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:50.500 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:50.500 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:39:50.500 23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:39:50.500 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:50.500 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:50.500 23:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:50.500 
23:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:39:50.500 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:50.500 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:50.500 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:39:50.500 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:50.500 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:39:50.501 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:39:50.501 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:39:50.501 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:39:50.501 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:39:50.501 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:50.501 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:39:50.501 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:50.501 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:39:50.501 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:50.501 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:39:50.501 [2024-12-09 23:21:31.063610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:39:50.501 [2024-12-09 23:21:31.066387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:39:50.501 [2024-12-09 23:21:31.066596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:39:50.501 [2024-12-09 23:21:31.066697] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:39:50.501 [2024-12-09 23:21:31.066981] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:39:50.501 [2024-12-09 23:21:31.067125] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:39:50.501 [2024-12-09 23:21:31.067195] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:50.501 [2024-12-09 23:21:31.067267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:39:50.501 request: 00:39:50.501 { 00:39:50.501 "name": "raid_bdev1", 00:39:50.501 "raid_level": "raid5f", 00:39:50.501 "base_bdevs": [ 00:39:50.501 "malloc1", 00:39:50.501 "malloc2", 00:39:50.501 "malloc3" 00:39:50.501 ], 00:39:50.501 "strip_size_kb": 64, 00:39:50.501 "superblock": false, 00:39:50.501 "method": "bdev_raid_create", 00:39:50.501 "req_id": 1 00:39:50.501 } 00:39:50.501 Got JSON-RPC error response 00:39:50.501 response: 00:39:50.501 { 00:39:50.501 "code": -17, 00:39:50.501 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:39:50.501 } 00:39:50.501 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:39:50.501 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:39:50.501 23:21:31 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:50.501 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:50.501 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:50.501 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:50.501 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:39:50.501 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:50.501 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:50.501 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:50.501 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:39:50.501 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:39:50.501 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:39:50.501 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:50.501 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:50.760 [2024-12-09 23:21:31.135528] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:39:50.760 [2024-12-09 23:21:31.135582] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:50.760 [2024-12-09 23:21:31.135608] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:39:50.760 [2024-12-09 23:21:31.135620] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:50.760 [2024-12-09 23:21:31.138474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:50.760 [2024-12-09 
23:21:31.138513] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:39:50.760 [2024-12-09 23:21:31.138591] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:39:50.760 [2024-12-09 23:21:31.138652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:39:50.760 pt1 00:39:50.760 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:50.760 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:39:50.760 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:50.760 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:50.760 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:39:50.760 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:50.760 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:50.760 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:50.760 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:50.760 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:50.760 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:50.760 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:50.760 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:50.760 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:50.760 23:21:31 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:50.760 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:50.760 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:50.760 "name": "raid_bdev1", 00:39:50.760 "uuid": "a33ca39a-46c1-4b14-b87c-a8038fb04441", 00:39:50.760 "strip_size_kb": 64, 00:39:50.760 "state": "configuring", 00:39:50.760 "raid_level": "raid5f", 00:39:50.760 "superblock": true, 00:39:50.760 "num_base_bdevs": 3, 00:39:50.760 "num_base_bdevs_discovered": 1, 00:39:50.760 "num_base_bdevs_operational": 3, 00:39:50.760 "base_bdevs_list": [ 00:39:50.760 { 00:39:50.760 "name": "pt1", 00:39:50.760 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:50.760 "is_configured": true, 00:39:50.760 "data_offset": 2048, 00:39:50.760 "data_size": 63488 00:39:50.760 }, 00:39:50.760 { 00:39:50.760 "name": null, 00:39:50.760 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:50.760 "is_configured": false, 00:39:50.760 "data_offset": 2048, 00:39:50.760 "data_size": 63488 00:39:50.760 }, 00:39:50.760 { 00:39:50.760 "name": null, 00:39:50.760 "uuid": "00000000-0000-0000-0000-000000000003", 00:39:50.760 "is_configured": false, 00:39:50.761 "data_offset": 2048, 00:39:50.761 "data_size": 63488 00:39:50.761 } 00:39:50.761 ] 00:39:50.761 }' 00:39:50.761 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:50.761 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:51.020 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:39:51.020 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:39:51.020 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:51.020 23:21:31 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:51.020 [2024-12-09 23:21:31.539134] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:39:51.020 [2024-12-09 23:21:31.539465] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:51.020 [2024-12-09 23:21:31.539539] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:39:51.020 [2024-12-09 23:21:31.539628] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:51.020 [2024-12-09 23:21:31.540324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:51.020 [2024-12-09 23:21:31.540490] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:39:51.020 [2024-12-09 23:21:31.540699] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:39:51.020 [2024-12-09 23:21:31.540855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:51.020 pt2 00:39:51.020 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:51.020 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:39:51.020 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:51.020 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:51.020 [2024-12-09 23:21:31.551069] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:39:51.020 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:51.020 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:39:51.020 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:51.020 23:21:31 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:51.020 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:39:51.020 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:51.020 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:51.020 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:51.020 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:51.020 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:51.020 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:51.020 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:51.020 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:51.020 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:51.020 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:51.020 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:51.020 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:51.020 "name": "raid_bdev1", 00:39:51.020 "uuid": "a33ca39a-46c1-4b14-b87c-a8038fb04441", 00:39:51.020 "strip_size_kb": 64, 00:39:51.020 "state": "configuring", 00:39:51.020 "raid_level": "raid5f", 00:39:51.020 "superblock": true, 00:39:51.020 "num_base_bdevs": 3, 00:39:51.020 "num_base_bdevs_discovered": 1, 00:39:51.020 "num_base_bdevs_operational": 3, 00:39:51.020 "base_bdevs_list": [ 00:39:51.020 { 00:39:51.020 "name": "pt1", 00:39:51.021 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:39:51.021 "is_configured": true, 00:39:51.021 "data_offset": 2048, 00:39:51.021 "data_size": 63488 00:39:51.021 }, 00:39:51.021 { 00:39:51.021 "name": null, 00:39:51.021 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:51.021 "is_configured": false, 00:39:51.021 "data_offset": 0, 00:39:51.021 "data_size": 63488 00:39:51.021 }, 00:39:51.021 { 00:39:51.021 "name": null, 00:39:51.021 "uuid": "00000000-0000-0000-0000-000000000003", 00:39:51.021 "is_configured": false, 00:39:51.021 "data_offset": 2048, 00:39:51.021 "data_size": 63488 00:39:51.021 } 00:39:51.021 ] 00:39:51.021 }' 00:39:51.021 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:51.021 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:51.587 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:39:51.587 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:39:51.587 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:39:51.588 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:51.588 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:51.588 [2024-12-09 23:21:31.978590] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:39:51.588 [2024-12-09 23:21:31.978703] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:51.588 [2024-12-09 23:21:31.978730] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:39:51.588 [2024-12-09 23:21:31.978746] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:51.588 [2024-12-09 23:21:31.979354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:39:51.588 [2024-12-09 23:21:31.979388] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:39:51.588 [2024-12-09 23:21:31.979524] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:39:51.588 [2024-12-09 23:21:31.979561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:51.588 pt2 00:39:51.588 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:51.588 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:39:51.588 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:39:51.588 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:39:51.588 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:51.588 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:51.588 [2024-12-09 23:21:31.986538] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:39:51.588 [2024-12-09 23:21:31.986597] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:51.588 [2024-12-09 23:21:31.986615] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:39:51.588 [2024-12-09 23:21:31.986630] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:51.588 [2024-12-09 23:21:31.987067] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:51.588 [2024-12-09 23:21:31.987093] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:39:51.588 [2024-12-09 23:21:31.987167] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:39:51.588 [2024-12-09 23:21:31.987191] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:39:51.588 [2024-12-09 23:21:31.987328] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:39:51.588 [2024-12-09 23:21:31.987342] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:39:51.588 [2024-12-09 23:21:31.987633] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:39:51.588 [2024-12-09 23:21:31.992847] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:39:51.588 [2024-12-09 23:21:31.992871] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:39:51.588 [2024-12-09 23:21:31.993073] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:51.588 pt3 00:39:51.588 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:51.588 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:39:51.588 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:39:51.588 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:39:51.588 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:51.588 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:51.588 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:39:51.588 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:51.588 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:51.588 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:39:51.588 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:51.588 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:51.588 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:51.588 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:51.588 23:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:51.588 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:51.588 23:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:51.588 23:21:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:51.588 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:51.588 "name": "raid_bdev1", 00:39:51.588 "uuid": "a33ca39a-46c1-4b14-b87c-a8038fb04441", 00:39:51.588 "strip_size_kb": 64, 00:39:51.588 "state": "online", 00:39:51.588 "raid_level": "raid5f", 00:39:51.588 "superblock": true, 00:39:51.588 "num_base_bdevs": 3, 00:39:51.588 "num_base_bdevs_discovered": 3, 00:39:51.588 "num_base_bdevs_operational": 3, 00:39:51.588 "base_bdevs_list": [ 00:39:51.588 { 00:39:51.588 "name": "pt1", 00:39:51.588 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:51.588 "is_configured": true, 00:39:51.588 "data_offset": 2048, 00:39:51.588 "data_size": 63488 00:39:51.588 }, 00:39:51.588 { 00:39:51.588 "name": "pt2", 00:39:51.588 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:51.588 "is_configured": true, 00:39:51.588 "data_offset": 2048, 00:39:51.588 "data_size": 63488 00:39:51.588 }, 00:39:51.588 { 00:39:51.588 "name": "pt3", 00:39:51.588 "uuid": "00000000-0000-0000-0000-000000000003", 00:39:51.588 "is_configured": true, 00:39:51.588 "data_offset": 2048, 
00:39:51.588 "data_size": 63488 00:39:51.588 } 00:39:51.588 ] 00:39:51.588 }' 00:39:51.588 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:51.588 23:21:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:51.845 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:39:51.845 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:39:51.845 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:39:51.845 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:39:51.845 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:39:51.845 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:39:51.845 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:39:51.845 23:21:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:51.845 23:21:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:51.845 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:39:51.845 [2024-12-09 23:21:32.415945] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:51.845 23:21:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:51.845 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:51.845 "name": "raid_bdev1", 00:39:51.845 "aliases": [ 00:39:51.845 "a33ca39a-46c1-4b14-b87c-a8038fb04441" 00:39:51.845 ], 00:39:51.845 "product_name": "Raid Volume", 00:39:51.845 "block_size": 512, 00:39:51.845 "num_blocks": 126976, 00:39:51.845 "uuid": "a33ca39a-46c1-4b14-b87c-a8038fb04441", 
00:39:51.845 "assigned_rate_limits": { 00:39:51.845 "rw_ios_per_sec": 0, 00:39:51.845 "rw_mbytes_per_sec": 0, 00:39:51.845 "r_mbytes_per_sec": 0, 00:39:51.845 "w_mbytes_per_sec": 0 00:39:51.845 }, 00:39:51.845 "claimed": false, 00:39:51.845 "zoned": false, 00:39:51.845 "supported_io_types": { 00:39:51.845 "read": true, 00:39:51.845 "write": true, 00:39:51.845 "unmap": false, 00:39:51.845 "flush": false, 00:39:51.845 "reset": true, 00:39:51.845 "nvme_admin": false, 00:39:51.845 "nvme_io": false, 00:39:51.846 "nvme_io_md": false, 00:39:51.846 "write_zeroes": true, 00:39:51.846 "zcopy": false, 00:39:51.846 "get_zone_info": false, 00:39:51.846 "zone_management": false, 00:39:51.846 "zone_append": false, 00:39:51.846 "compare": false, 00:39:51.846 "compare_and_write": false, 00:39:51.846 "abort": false, 00:39:51.846 "seek_hole": false, 00:39:51.846 "seek_data": false, 00:39:51.846 "copy": false, 00:39:51.846 "nvme_iov_md": false 00:39:51.846 }, 00:39:51.846 "driver_specific": { 00:39:51.846 "raid": { 00:39:51.846 "uuid": "a33ca39a-46c1-4b14-b87c-a8038fb04441", 00:39:51.846 "strip_size_kb": 64, 00:39:51.846 "state": "online", 00:39:51.846 "raid_level": "raid5f", 00:39:51.846 "superblock": true, 00:39:51.846 "num_base_bdevs": 3, 00:39:51.846 "num_base_bdevs_discovered": 3, 00:39:51.846 "num_base_bdevs_operational": 3, 00:39:51.846 "base_bdevs_list": [ 00:39:51.846 { 00:39:51.846 "name": "pt1", 00:39:51.846 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:51.846 "is_configured": true, 00:39:51.846 "data_offset": 2048, 00:39:51.846 "data_size": 63488 00:39:51.846 }, 00:39:51.846 { 00:39:51.846 "name": "pt2", 00:39:51.846 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:51.846 "is_configured": true, 00:39:51.846 "data_offset": 2048, 00:39:51.846 "data_size": 63488 00:39:51.846 }, 00:39:51.846 { 00:39:51.846 "name": "pt3", 00:39:51.846 "uuid": "00000000-0000-0000-0000-000000000003", 00:39:51.846 "is_configured": true, 00:39:51.846 "data_offset": 2048, 00:39:51.846 
"data_size": 63488 00:39:51.846 } 00:39:51.846 ] 00:39:51.846 } 00:39:51.846 } 00:39:51.846 }' 00:39:51.846 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:39:52.103 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:39:52.103 pt2 00:39:52.103 pt3' 00:39:52.103 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:52.103 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:39:52.103 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:52.103 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:39:52.104 23:21:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:52.104 23:21:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:52.104 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:52.104 23:21:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:52.104 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:52.104 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:52.104 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:52.104 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:39:52.104 23:21:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:52.104 23:21:32 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:39:52.104 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:52.104 23:21:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:52.104 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:52.104 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:52.104 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:52.104 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:39:52.104 23:21:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:52.104 23:21:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:52.104 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:52.104 23:21:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:52.104 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:39:52.104 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:39:52.104 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:39:52.104 23:21:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:52.104 23:21:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:52.104 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:39:52.104 [2024-12-09 23:21:32.695789] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
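Steps @188–@193 of the trace extract the configured base bdev names with `jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'` and then compare `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")` between the raid volume and each pt bdev. A sketch of those two filters in Python, with sample data taken from the JSON in this log; note that jq's `join` renders `null` as the empty string, which is why the script's pattern match is against `512` followed by three spaces (`[[ 512 == \5\1\2\ \ \ ]]`):

```python
# Shape of the `bdev_get_bdevs -b raid_bdev1` output seen in this log,
# trimmed to the fields the two jq filters touch.
raid_info = {
    "block_size": 512, "md_size": None, "md_interleave": None, "dif_type": None,
    "driver_specific": {"raid": {"base_bdevs_list": [
        {"name": "pt1", "is_configured": True},
        {"name": "pt2", "is_configured": True},
        {"name": "pt3", "is_configured": True},
    ]}},
}

# jq: '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
base_bdev_names = [b["name"]
                   for b in raid_info["driver_specific"]["raid"]["base_bdevs_list"]
                   if b["is_configured"]]

# jq: '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
# jq turns each null into "" inside join, producing the trailing spaces in '512   '.
def fmt(info):
    keys = ("block_size", "md_size", "md_interleave", "dif_type")
    return " ".join("" if info[k] is None else str(info[k]) for k in keys)

print(base_bdev_names)        # ['pt1', 'pt2', 'pt3']
print(repr(fmt(raid_info)))   # '512   '
```

Comparing the joined geometry string for the raid volume against each base bdev is how the test confirms the RAID bdev inherited block size and metadata layout from its members.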
00:39:52.104 23:21:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:52.104 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a33ca39a-46c1-4b14-b87c-a8038fb04441 '!=' a33ca39a-46c1-4b14-b87c-a8038fb04441 ']' 00:39:52.104 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:39:52.104 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:39:52.104 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:39:52.104 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:39:52.104 23:21:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:52.104 23:21:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:52.362 [2024-12-09 23:21:32.739642] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:39:52.362 23:21:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:52.362 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:39:52.362 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:52.362 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:52.362 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:39:52.362 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:52.362 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:52.362 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:52.362 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:39:52.362 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:52.362 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:52.362 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:52.362 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:52.362 23:21:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:52.362 23:21:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:52.362 23:21:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:52.362 23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:52.362 "name": "raid_bdev1", 00:39:52.362 "uuid": "a33ca39a-46c1-4b14-b87c-a8038fb04441", 00:39:52.362 "strip_size_kb": 64, 00:39:52.362 "state": "online", 00:39:52.362 "raid_level": "raid5f", 00:39:52.362 "superblock": true, 00:39:52.362 "num_base_bdevs": 3, 00:39:52.362 "num_base_bdevs_discovered": 2, 00:39:52.362 "num_base_bdevs_operational": 2, 00:39:52.362 "base_bdevs_list": [ 00:39:52.362 { 00:39:52.362 "name": null, 00:39:52.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:52.362 "is_configured": false, 00:39:52.362 "data_offset": 0, 00:39:52.362 "data_size": 63488 00:39:52.362 }, 00:39:52.362 { 00:39:52.362 "name": "pt2", 00:39:52.362 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:52.362 "is_configured": true, 00:39:52.362 "data_offset": 2048, 00:39:52.362 "data_size": 63488 00:39:52.362 }, 00:39:52.362 { 00:39:52.362 "name": "pt3", 00:39:52.362 "uuid": "00000000-0000-0000-0000-000000000003", 00:39:52.362 "is_configured": true, 00:39:52.362 "data_offset": 2048, 00:39:52.362 "data_size": 63488 00:39:52.362 } 00:39:52.362 ] 00:39:52.362 }' 00:39:52.362 
23:21:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:52.362 23:21:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:52.620 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:39:52.620 23:21:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:52.620 23:21:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:52.620 [2024-12-09 23:21:33.191313] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:52.620 [2024-12-09 23:21:33.191568] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:52.620 [2024-12-09 23:21:33.191705] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:52.620 [2024-12-09 23:21:33.191779] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:52.620 [2024-12-09 23:21:33.191800] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:39:52.620 23:21:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:52.620 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:52.620 23:21:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:52.620 23:21:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:52.620 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:39:52.620 23:21:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:52.620 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:39:52.620 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- 
# '[' -n '' ']' 00:39:52.620 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:39:52.620 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:39:52.620 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:39:52.620 23:21:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:52.620 23:21:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:52.879 23:21:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:52.879 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:39:52.879 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:39:52.879 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:39:52.879 23:21:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:52.879 23:21:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:52.879 23:21:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:52.879 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:39:52.879 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:39:52.879 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:39:52.879 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:39:52.879 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:39:52.879 23:21:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:52.879 
23:21:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:52.879 [2024-12-09 23:21:33.267104] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:39:52.879 [2024-12-09 23:21:33.267171] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:52.879 [2024-12-09 23:21:33.267193] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:39:52.879 [2024-12-09 23:21:33.267209] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:52.879 [2024-12-09 23:21:33.270072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:52.879 [2024-12-09 23:21:33.270261] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:39:52.879 [2024-12-09 23:21:33.270368] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:39:52.879 [2024-12-09 23:21:33.270459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:52.879 pt2 00:39:52.879 23:21:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:52.879 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:39:52.879 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:52.879 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:52.879 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:39:52.879 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:52.879 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:52.879 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:39:52.879 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:52.879 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:52.879 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:52.879 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:52.879 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:52.879 23:21:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:52.879 23:21:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:52.879 23:21:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:52.879 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:52.879 "name": "raid_bdev1", 00:39:52.879 "uuid": "a33ca39a-46c1-4b14-b87c-a8038fb04441", 00:39:52.879 "strip_size_kb": 64, 00:39:52.879 "state": "configuring", 00:39:52.879 "raid_level": "raid5f", 00:39:52.879 "superblock": true, 00:39:52.879 "num_base_bdevs": 3, 00:39:52.879 "num_base_bdevs_discovered": 1, 00:39:52.879 "num_base_bdevs_operational": 2, 00:39:52.879 "base_bdevs_list": [ 00:39:52.879 { 00:39:52.879 "name": null, 00:39:52.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:52.879 "is_configured": false, 00:39:52.879 "data_offset": 2048, 00:39:52.879 "data_size": 63488 00:39:52.879 }, 00:39:52.879 { 00:39:52.879 "name": "pt2", 00:39:52.879 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:52.879 "is_configured": true, 00:39:52.879 "data_offset": 2048, 00:39:52.879 "data_size": 63488 00:39:52.879 }, 00:39:52.879 { 00:39:52.879 "name": null, 00:39:52.879 "uuid": "00000000-0000-0000-0000-000000000003", 00:39:52.879 "is_configured": false, 00:39:52.879 "data_offset": 2048, 
00:39:52.879 "data_size": 63488 00:39:52.879 } 00:39:52.879 ] 00:39:52.879 }' 00:39:52.879 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:52.879 23:21:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:53.214 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:39:53.214 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:39:53.214 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:39:53.214 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:39:53.214 23:21:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.214 23:21:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:53.214 [2024-12-09 23:21:33.670605] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:39:53.214 [2024-12-09 23:21:33.670725] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:53.214 [2024-12-09 23:21:33.670755] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:39:53.214 [2024-12-09 23:21:33.670772] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:53.214 [2024-12-09 23:21:33.671430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:53.214 [2024-12-09 23:21:33.671464] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:39:53.214 [2024-12-09 23:21:33.671579] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:39:53.214 [2024-12-09 23:21:33.671619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:39:53.214 [2024-12-09 23:21:33.671770] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:39:53.214 [2024-12-09 23:21:33.671785] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:39:53.214 [2024-12-09 23:21:33.672106] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:39:53.214 [2024-12-09 23:21:33.677790] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:39:53.214 [2024-12-09 23:21:33.677817] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:39:53.214 [2024-12-09 23:21:33.678223] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:53.214 pt3 00:39:53.214 23:21:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.214 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:39:53.214 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:53.214 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:53.214 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:39:53.214 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:53.214 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:53.214 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:53.214 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:53.214 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:53.214 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:53.214 
23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:53.214 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:53.214 23:21:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.214 23:21:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:53.214 23:21:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.214 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:53.214 "name": "raid_bdev1", 00:39:53.214 "uuid": "a33ca39a-46c1-4b14-b87c-a8038fb04441", 00:39:53.214 "strip_size_kb": 64, 00:39:53.214 "state": "online", 00:39:53.214 "raid_level": "raid5f", 00:39:53.214 "superblock": true, 00:39:53.214 "num_base_bdevs": 3, 00:39:53.214 "num_base_bdevs_discovered": 2, 00:39:53.214 "num_base_bdevs_operational": 2, 00:39:53.214 "base_bdevs_list": [ 00:39:53.214 { 00:39:53.214 "name": null, 00:39:53.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:53.214 "is_configured": false, 00:39:53.214 "data_offset": 2048, 00:39:53.214 "data_size": 63488 00:39:53.214 }, 00:39:53.214 { 00:39:53.214 "name": "pt2", 00:39:53.214 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:53.214 "is_configured": true, 00:39:53.214 "data_offset": 2048, 00:39:53.214 "data_size": 63488 00:39:53.214 }, 00:39:53.214 { 00:39:53.214 "name": "pt3", 00:39:53.214 "uuid": "00000000-0000-0000-0000-000000000003", 00:39:53.214 "is_configured": true, 00:39:53.214 "data_offset": 2048, 00:39:53.214 "data_size": 63488 00:39:53.214 } 00:39:53.214 ] 00:39:53.214 }' 00:39:53.214 23:21:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:53.214 23:21:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:53.471 23:21:34 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:39:53.471 23:21:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.471 23:21:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:53.471 [2024-12-09 23:21:34.089566] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:53.471 [2024-12-09 23:21:34.089805] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:53.471 [2024-12-09 23:21:34.090082] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:53.471 [2024-12-09 23:21:34.090268] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:53.471 [2024-12-09 23:21:34.090389] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:39:53.471 23:21:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.471 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:39:53.471 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:53.471 23:21:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.471 23:21:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:53.728 23:21:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.728 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:39:53.728 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:39:53.728 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:39:53.729 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:39:53.729 23:21:34 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:39:53.729 23:21:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.729 23:21:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:53.729 23:21:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.729 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:39:53.729 23:21:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.729 23:21:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:53.729 [2024-12-09 23:21:34.157578] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:39:53.729 [2024-12-09 23:21:34.157674] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:53.729 [2024-12-09 23:21:34.157703] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:39:53.729 [2024-12-09 23:21:34.157717] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:53.729 [2024-12-09 23:21:34.160827] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:53.729 [2024-12-09 23:21:34.161054] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:39:53.729 [2024-12-09 23:21:34.161198] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:39:53.729 [2024-12-09 23:21:34.161263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:39:53.729 [2024-12-09 23:21:34.161465] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:39:53.729 [2024-12-09 23:21:34.161482] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:53.729 [2024-12-09 23:21:34.161505] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:39:53.729 [2024-12-09 23:21:34.161581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:53.729 pt1 00:39:53.729 23:21:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.729 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:39:53.729 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:39:53.729 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:53.729 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:53.729 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:39:53.729 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:53.729 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:53.729 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:53.729 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:53.729 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:53.729 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:53.729 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:53.729 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:53.729 23:21:34 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.729 23:21:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:53.729 23:21:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.729 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:53.729 "name": "raid_bdev1", 00:39:53.729 "uuid": "a33ca39a-46c1-4b14-b87c-a8038fb04441", 00:39:53.729 "strip_size_kb": 64, 00:39:53.729 "state": "configuring", 00:39:53.729 "raid_level": "raid5f", 00:39:53.729 "superblock": true, 00:39:53.729 "num_base_bdevs": 3, 00:39:53.729 "num_base_bdevs_discovered": 1, 00:39:53.729 "num_base_bdevs_operational": 2, 00:39:53.729 "base_bdevs_list": [ 00:39:53.729 { 00:39:53.729 "name": null, 00:39:53.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:53.729 "is_configured": false, 00:39:53.729 "data_offset": 2048, 00:39:53.729 "data_size": 63488 00:39:53.729 }, 00:39:53.729 { 00:39:53.729 "name": "pt2", 00:39:53.729 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:53.729 "is_configured": true, 00:39:53.729 "data_offset": 2048, 00:39:53.729 "data_size": 63488 00:39:53.729 }, 00:39:53.729 { 00:39:53.729 "name": null, 00:39:53.729 "uuid": "00000000-0000-0000-0000-000000000003", 00:39:53.729 "is_configured": false, 00:39:53.729 "data_offset": 2048, 00:39:53.729 "data_size": 63488 00:39:53.729 } 00:39:53.729 ] 00:39:53.729 }' 00:39:53.729 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:53.729 23:21:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:53.986 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:39:53.986 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:39:53.986 23:21:34 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.986 23:21:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:54.244 23:21:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:54.244 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:39:54.244 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:39:54.244 23:21:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:54.244 23:21:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:54.244 [2024-12-09 23:21:34.665575] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:39:54.244 [2024-12-09 23:21:34.665675] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:54.244 [2024-12-09 23:21:34.665705] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:39:54.244 [2024-12-09 23:21:34.665720] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:54.244 [2024-12-09 23:21:34.666340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:54.244 [2024-12-09 23:21:34.666371] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:39:54.244 [2024-12-09 23:21:34.666516] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:39:54.244 [2024-12-09 23:21:34.666549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:39:54.244 [2024-12-09 23:21:34.666703] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:39:54.244 [2024-12-09 23:21:34.666714] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:39:54.244 [2024-12-09 
23:21:34.667032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:39:54.244 [2024-12-09 23:21:34.672386] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:39:54.244 [2024-12-09 23:21:34.672430] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:39:54.244 [2024-12-09 23:21:34.672731] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:54.244 pt3 00:39:54.244 23:21:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:54.244 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:39:54.244 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:54.244 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:54.244 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:39:54.244 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:54.244 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:54.244 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:54.244 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:54.244 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:54.244 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:54.244 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:54.244 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:39:54.244 23:21:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:54.244 23:21:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:54.244 23:21:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:54.244 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:54.244 "name": "raid_bdev1", 00:39:54.244 "uuid": "a33ca39a-46c1-4b14-b87c-a8038fb04441", 00:39:54.244 "strip_size_kb": 64, 00:39:54.244 "state": "online", 00:39:54.244 "raid_level": "raid5f", 00:39:54.244 "superblock": true, 00:39:54.244 "num_base_bdevs": 3, 00:39:54.244 "num_base_bdevs_discovered": 2, 00:39:54.244 "num_base_bdevs_operational": 2, 00:39:54.244 "base_bdevs_list": [ 00:39:54.244 { 00:39:54.244 "name": null, 00:39:54.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:54.244 "is_configured": false, 00:39:54.244 "data_offset": 2048, 00:39:54.244 "data_size": 63488 00:39:54.244 }, 00:39:54.244 { 00:39:54.244 "name": "pt2", 00:39:54.244 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:54.244 "is_configured": true, 00:39:54.244 "data_offset": 2048, 00:39:54.244 "data_size": 63488 00:39:54.244 }, 00:39:54.244 { 00:39:54.244 "name": "pt3", 00:39:54.244 "uuid": "00000000-0000-0000-0000-000000000003", 00:39:54.244 "is_configured": true, 00:39:54.244 "data_offset": 2048, 00:39:54.244 "data_size": 63488 00:39:54.244 } 00:39:54.244 ] 00:39:54.244 }' 00:39:54.244 23:21:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:54.244 23:21:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:54.503 23:21:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:39:54.503 23:21:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:39:54.503 23:21:35 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:54.503 23:21:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:54.503 23:21:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:54.760 23:21:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:39:54.760 23:21:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:39:54.760 23:21:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:39:54.760 23:21:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:54.760 23:21:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:54.760 [2024-12-09 23:21:35.159824] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:54.760 23:21:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:54.760 23:21:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' a33ca39a-46c1-4b14-b87c-a8038fb04441 '!=' a33ca39a-46c1-4b14-b87c-a8038fb04441 ']' 00:39:54.760 23:21:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81017 00:39:54.760 23:21:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81017 ']' 00:39:54.760 23:21:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81017 00:39:54.760 23:21:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:39:54.760 23:21:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:54.760 23:21:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81017 00:39:54.760 killing process with pid 81017 00:39:54.760 23:21:35 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:54.760 23:21:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:54.760 23:21:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81017' 00:39:54.760 23:21:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81017 00:39:54.760 [2024-12-09 23:21:35.233829] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:54.760 23:21:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81017 00:39:54.760 [2024-12-09 23:21:35.233952] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:54.760 [2024-12-09 23:21:35.234028] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:54.760 [2024-12-09 23:21:35.234046] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:39:55.020 [2024-12-09 23:21:35.570853] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:56.390 23:21:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:39:56.390 ************************************ 00:39:56.390 END TEST raid5f_superblock_test 00:39:56.390 ************************************ 00:39:56.390 00:39:56.390 real 0m7.824s 00:39:56.390 user 0m11.938s 00:39:56.390 sys 0m1.700s 00:39:56.390 23:21:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:56.390 23:21:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:39:56.390 23:21:36 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:39:56.390 23:21:36 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:39:56.390 23:21:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 
00:39:56.390 23:21:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:56.390 23:21:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:39:56.390 ************************************ 00:39:56.390 START TEST raid5f_rebuild_test 00:39:56.390 ************************************ 00:39:56.390 23:21:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:39:56.390 23:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:39:56.390 23:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:39:56.390 23:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:39:56.390 23:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:39:56.390 23:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:39:56.390 23:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:39:56.390 23:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:39:56.390 23:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:39:56.390 23:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:39:56.390 23:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:39:56.390 23:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:39:56.390 23:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:39:56.390 23:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:39:56.390 23:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:39:56.390 23:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:39:56.390 23:21:36 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:39:56.390 23:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:39:56.390 23:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:39:56.390 23:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:39:56.390 23:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:39:56.390 23:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:39:56.390 23:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:39:56.390 23:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:39:56.390 23:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:39:56.390 23:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:39:56.390 23:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:39:56.390 23:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:39:56.390 23:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:39:56.390 23:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81455 00:39:56.390 23:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81455 00:39:56.390 23:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:39:56.390 23:21:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81455 ']' 00:39:56.390 23:21:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:56.390 23:21:36 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:56.390 23:21:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:56.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:56.390 23:21:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:56.390 23:21:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:39:56.390 [2024-12-09 23:21:37.024419] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:39:56.648 [2024-12-09 23:21:37.024693] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:39:56.648 Zero copy mechanism will not be used. 00:39:56.648 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81455 ] 00:39:56.648 [2024-12-09 23:21:37.200262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:56.906 [2024-12-09 23:21:37.341540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:57.163 [2024-12-09 23:21:37.583014] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:57.163 [2024-12-09 23:21:37.583421] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:57.421 23:21:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:57.421 23:21:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:39:57.421 23:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:39:57.421 23:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev1_malloc 00:39:57.421 23:21:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.421 23:21:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:39:57.421 BaseBdev1_malloc 00:39:57.421 23:21:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.421 23:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:39:57.422 23:21:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.422 23:21:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:39:57.422 [2024-12-09 23:21:37.911435] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:39:57.422 [2024-12-09 23:21:37.911519] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:57.422 [2024-12-09 23:21:37.911550] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:39:57.422 [2024-12-09 23:21:37.911566] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:57.422 [2024-12-09 23:21:37.914353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:57.422 [2024-12-09 23:21:37.914430] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:39:57.422 BaseBdev1 00:39:57.422 23:21:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.422 23:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:39:57.422 23:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:39:57.422 23:21:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.422 23:21:37 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:39:57.422 BaseBdev2_malloc 00:39:57.422 23:21:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.422 23:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:39:57.422 23:21:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.422 23:21:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:39:57.422 [2024-12-09 23:21:37.974258] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:39:57.422 [2024-12-09 23:21:37.974328] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:57.422 [2024-12-09 23:21:37.974354] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:39:57.422 [2024-12-09 23:21:37.974370] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:57.422 [2024-12-09 23:21:37.977064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:57.422 [2024-12-09 23:21:37.977329] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:39:57.422 BaseBdev2 00:39:57.422 23:21:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.422 23:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:39:57.422 23:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:39:57.422 23:21:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.422 23:21:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:39:57.422 BaseBdev3_malloc 00:39:57.422 23:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.422 23:21:38 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:39:57.422 23:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.422 23:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:39:57.422 [2024-12-09 23:21:38.048454] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:39:57.422 [2024-12-09 23:21:38.048698] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:57.422 [2024-12-09 23:21:38.048734] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:39:57.422 [2024-12-09 23:21:38.048750] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:57.422 [2024-12-09 23:21:38.051460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:57.422 [2024-12-09 23:21:38.051500] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:39:57.422 BaseBdev3 00:39:57.422 23:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.422 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:39:57.422 23:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.422 23:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:39:57.680 spare_malloc 00:39:57.680 23:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.680 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:39:57.680 23:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.680 23:21:38 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:39:57.680 spare_delay 00:39:57.680 23:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.680 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:39:57.680 23:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.680 23:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:39:57.680 [2024-12-09 23:21:38.121704] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:39:57.680 [2024-12-09 23:21:38.121782] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:57.680 [2024-12-09 23:21:38.121810] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:39:57.680 [2024-12-09 23:21:38.121826] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:57.680 [2024-12-09 23:21:38.124664] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:57.680 [2024-12-09 23:21:38.124713] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:39:57.680 spare 00:39:57.680 23:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.680 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:39:57.680 23:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.681 23:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:39:57.681 [2024-12-09 23:21:38.133759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:57.681 [2024-12-09 23:21:38.136442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:39:57.681 [2024-12-09 23:21:38.136644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:39:57.681 [2024-12-09 23:21:38.136790] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:39:57.681 [2024-12-09 23:21:38.136951] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:39:57.681 [2024-12-09 23:21:38.137301] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:39:57.681 [2024-12-09 23:21:38.143880] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:39:57.681 [2024-12-09 23:21:38.144017] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:39:57.681 [2024-12-09 23:21:38.144364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:57.681 23:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.681 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:39:57.681 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:57.681 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:57.681 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:39:57.681 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:57.681 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:57.681 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:57.681 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:57.681 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:39:57.681 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:57.681 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:57.681 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:57.681 23:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.681 23:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:39:57.681 23:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.681 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:57.681 "name": "raid_bdev1", 00:39:57.681 "uuid": "eb81e510-f4d9-4653-bfc2-ad70d88c8c06", 00:39:57.681 "strip_size_kb": 64, 00:39:57.681 "state": "online", 00:39:57.681 "raid_level": "raid5f", 00:39:57.681 "superblock": false, 00:39:57.681 "num_base_bdevs": 3, 00:39:57.681 "num_base_bdevs_discovered": 3, 00:39:57.681 "num_base_bdevs_operational": 3, 00:39:57.681 "base_bdevs_list": [ 00:39:57.681 { 00:39:57.681 "name": "BaseBdev1", 00:39:57.681 "uuid": "e2e99121-2e43-557b-ac7e-7b9caeeea040", 00:39:57.681 "is_configured": true, 00:39:57.681 "data_offset": 0, 00:39:57.681 "data_size": 65536 00:39:57.681 }, 00:39:57.681 { 00:39:57.681 "name": "BaseBdev2", 00:39:57.681 "uuid": "f74fed58-7277-5b5e-b3a5-41ee2e6c2ccd", 00:39:57.681 "is_configured": true, 00:39:57.681 "data_offset": 0, 00:39:57.681 "data_size": 65536 00:39:57.681 }, 00:39:57.681 { 00:39:57.681 "name": "BaseBdev3", 00:39:57.681 "uuid": "2143f57c-97f7-5ed9-92dc-ea5036eee143", 00:39:57.681 "is_configured": true, 00:39:57.681 "data_offset": 0, 00:39:57.681 "data_size": 65536 00:39:57.681 } 00:39:57.681 ] 00:39:57.681 }' 00:39:57.681 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:57.681 23:21:38 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:39:57.939 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:39:57.939 23:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.939 23:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:39:57.939 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:39:57.939 [2024-12-09 23:21:38.515889] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:57.939 23:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.939 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:39:57.939 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:39:57.939 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:57.939 23:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.939 23:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:39:57.939 23:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:58.198 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:39:58.198 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:39:58.198 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:39:58.198 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:39:58.198 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:39:58.198 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:39:58.198 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:39:58.198 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:58.198 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:39:58.198 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:58.198 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:39:58.198 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:58.198 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:58.198 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:39:58.198 [2024-12-09 23:21:38.767717] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:39:58.198 /dev/nbd0 00:39:58.198 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:39:58.198 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:39:58.198 23:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:39:58.198 23:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:39:58.198 23:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:58.198 23:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:58.198 23:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:39:58.484 23:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:39:58.484 23:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:58.484 
23:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:58.484 23:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:58.484 1+0 records in 00:39:58.484 1+0 records out 00:39:58.484 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433035 s, 9.5 MB/s 00:39:58.484 23:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:58.484 23:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:39:58.484 23:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:58.484 23:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:58.484 23:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:39:58.484 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:58.484 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:58.484 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:39:58.484 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:39:58.484 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:39:58.484 23:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:39:58.741 512+0 records in 00:39:58.741 512+0 records out 00:39:58.741 67108864 bytes (67 MB, 64 MiB) copied, 0.466687 s, 144 MB/s 00:39:58.741 23:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:39:58.741 23:21:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk.sock 00:39:58.741 23:21:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:39:58.741 23:21:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:58.741 23:21:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:39:58.741 23:21:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:58.741 23:21:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:39:59.000 23:21:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:59.000 [2024-12-09 23:21:39.554639] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:59.000 23:21:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:59.000 23:21:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:59.000 23:21:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:59.000 23:21:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:59.000 23:21:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:59.000 23:21:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:39:59.000 23:21:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:39:59.000 23:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:39:59.000 23:21:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:59.000 23:21:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:39:59.000 [2024-12-09 23:21:39.570688] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:39:59.000 23:21:39 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:59.000 23:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:39:59.000 23:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:59.000 23:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:59.000 23:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:39:59.000 23:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:39:59.000 23:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:59.000 23:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:59.000 23:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:59.000 23:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:59.000 23:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:59.000 23:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:59.000 23:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:59.000 23:21:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:59.000 23:21:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:39:59.000 23:21:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:59.000 23:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:59.000 "name": "raid_bdev1", 00:39:59.000 "uuid": "eb81e510-f4d9-4653-bfc2-ad70d88c8c06", 00:39:59.000 "strip_size_kb": 64, 00:39:59.000 "state": "online", 00:39:59.000 "raid_level": "raid5f", 00:39:59.000 
"superblock": false, 00:39:59.000 "num_base_bdevs": 3, 00:39:59.000 "num_base_bdevs_discovered": 2, 00:39:59.000 "num_base_bdevs_operational": 2, 00:39:59.000 "base_bdevs_list": [ 00:39:59.000 { 00:39:59.000 "name": null, 00:39:59.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:59.000 "is_configured": false, 00:39:59.000 "data_offset": 0, 00:39:59.000 "data_size": 65536 00:39:59.000 }, 00:39:59.000 { 00:39:59.000 "name": "BaseBdev2", 00:39:59.000 "uuid": "f74fed58-7277-5b5e-b3a5-41ee2e6c2ccd", 00:39:59.000 "is_configured": true, 00:39:59.000 "data_offset": 0, 00:39:59.000 "data_size": 65536 00:39:59.000 }, 00:39:59.000 { 00:39:59.000 "name": "BaseBdev3", 00:39:59.000 "uuid": "2143f57c-97f7-5ed9-92dc-ea5036eee143", 00:39:59.000 "is_configured": true, 00:39:59.000 "data_offset": 0, 00:39:59.000 "data_size": 65536 00:39:59.000 } 00:39:59.000 ] 00:39:59.000 }' 00:39:59.000 23:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:59.000 23:21:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:39:59.568 23:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:39:59.568 23:21:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:59.568 23:21:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:39:59.568 [2024-12-09 23:21:39.982633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:59.568 [2024-12-09 23:21:39.999724] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:39:59.568 23:21:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:59.568 23:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:39:59.568 [2024-12-09 23:21:40.008552] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:40:00.502 
23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:00.502 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:00.502 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:00.502 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:00.502 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:00.502 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:00.502 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:00.502 23:21:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:00.502 23:21:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:00.502 23:21:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:00.502 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:00.502 "name": "raid_bdev1", 00:40:00.502 "uuid": "eb81e510-f4d9-4653-bfc2-ad70d88c8c06", 00:40:00.502 "strip_size_kb": 64, 00:40:00.502 "state": "online", 00:40:00.502 "raid_level": "raid5f", 00:40:00.502 "superblock": false, 00:40:00.502 "num_base_bdevs": 3, 00:40:00.502 "num_base_bdevs_discovered": 3, 00:40:00.502 "num_base_bdevs_operational": 3, 00:40:00.502 "process": { 00:40:00.502 "type": "rebuild", 00:40:00.502 "target": "spare", 00:40:00.502 "progress": { 00:40:00.502 "blocks": 18432, 00:40:00.502 "percent": 14 00:40:00.502 } 00:40:00.502 }, 00:40:00.502 "base_bdevs_list": [ 00:40:00.502 { 00:40:00.502 "name": "spare", 00:40:00.502 "uuid": "be748f86-82f6-5a49-9bb5-b598bad73f27", 00:40:00.502 "is_configured": true, 00:40:00.502 "data_offset": 0, 00:40:00.502 "data_size": 65536 
00:40:00.502 }, 00:40:00.502 { 00:40:00.502 "name": "BaseBdev2", 00:40:00.502 "uuid": "f74fed58-7277-5b5e-b3a5-41ee2e6c2ccd", 00:40:00.502 "is_configured": true, 00:40:00.502 "data_offset": 0, 00:40:00.502 "data_size": 65536 00:40:00.502 }, 00:40:00.502 { 00:40:00.502 "name": "BaseBdev3", 00:40:00.502 "uuid": "2143f57c-97f7-5ed9-92dc-ea5036eee143", 00:40:00.502 "is_configured": true, 00:40:00.502 "data_offset": 0, 00:40:00.502 "data_size": 65536 00:40:00.502 } 00:40:00.502 ] 00:40:00.502 }' 00:40:00.502 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:00.502 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:00.502 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:00.760 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:00.760 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:40:00.760 23:21:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:00.760 23:21:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:00.760 [2024-12-09 23:21:41.150783] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:00.760 [2024-12-09 23:21:41.223003] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:40:00.760 [2024-12-09 23:21:41.223570] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:00.760 [2024-12-09 23:21:41.223606] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:00.760 [2024-12-09 23:21:41.223620] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:40:00.760 23:21:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:40:00.760 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:40:00.760 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:00.760 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:00.760 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:00.760 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:00.760 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:40:00.760 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:00.760 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:00.760 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:00.760 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:00.760 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:00.760 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:00.760 23:21:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:00.760 23:21:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:00.760 23:21:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:00.761 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:00.761 "name": "raid_bdev1", 00:40:00.761 "uuid": "eb81e510-f4d9-4653-bfc2-ad70d88c8c06", 00:40:00.761 "strip_size_kb": 64, 00:40:00.761 "state": "online", 00:40:00.761 "raid_level": "raid5f", 00:40:00.761 "superblock": false, 00:40:00.761 
"num_base_bdevs": 3, 00:40:00.761 "num_base_bdevs_discovered": 2, 00:40:00.761 "num_base_bdevs_operational": 2, 00:40:00.761 "base_bdevs_list": [ 00:40:00.761 { 00:40:00.761 "name": null, 00:40:00.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:00.761 "is_configured": false, 00:40:00.761 "data_offset": 0, 00:40:00.761 "data_size": 65536 00:40:00.761 }, 00:40:00.761 { 00:40:00.761 "name": "BaseBdev2", 00:40:00.761 "uuid": "f74fed58-7277-5b5e-b3a5-41ee2e6c2ccd", 00:40:00.761 "is_configured": true, 00:40:00.761 "data_offset": 0, 00:40:00.761 "data_size": 65536 00:40:00.761 }, 00:40:00.761 { 00:40:00.761 "name": "BaseBdev3", 00:40:00.761 "uuid": "2143f57c-97f7-5ed9-92dc-ea5036eee143", 00:40:00.761 "is_configured": true, 00:40:00.761 "data_offset": 0, 00:40:00.761 "data_size": 65536 00:40:00.761 } 00:40:00.761 ] 00:40:00.761 }' 00:40:00.761 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:00.761 23:21:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:01.326 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:01.326 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:01.326 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:40:01.326 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:40:01.326 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:01.326 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:01.326 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:01.326 23:21:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:01.326 23:21:41 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:40:01.326 23:21:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:01.326 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:01.326 "name": "raid_bdev1", 00:40:01.326 "uuid": "eb81e510-f4d9-4653-bfc2-ad70d88c8c06", 00:40:01.326 "strip_size_kb": 64, 00:40:01.326 "state": "online", 00:40:01.326 "raid_level": "raid5f", 00:40:01.326 "superblock": false, 00:40:01.326 "num_base_bdevs": 3, 00:40:01.326 "num_base_bdevs_discovered": 2, 00:40:01.326 "num_base_bdevs_operational": 2, 00:40:01.326 "base_bdevs_list": [ 00:40:01.326 { 00:40:01.326 "name": null, 00:40:01.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:01.326 "is_configured": false, 00:40:01.326 "data_offset": 0, 00:40:01.326 "data_size": 65536 00:40:01.326 }, 00:40:01.326 { 00:40:01.326 "name": "BaseBdev2", 00:40:01.326 "uuid": "f74fed58-7277-5b5e-b3a5-41ee2e6c2ccd", 00:40:01.326 "is_configured": true, 00:40:01.326 "data_offset": 0, 00:40:01.326 "data_size": 65536 00:40:01.326 }, 00:40:01.326 { 00:40:01.327 "name": "BaseBdev3", 00:40:01.327 "uuid": "2143f57c-97f7-5ed9-92dc-ea5036eee143", 00:40:01.327 "is_configured": true, 00:40:01.327 "data_offset": 0, 00:40:01.327 "data_size": 65536 00:40:01.327 } 00:40:01.327 ] 00:40:01.327 }' 00:40:01.327 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:01.327 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:40:01.327 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:01.327 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:40:01.327 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:40:01.327 23:21:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:40:01.327 23:21:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:01.327 [2024-12-09 23:21:41.832186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:01.327 [2024-12-09 23:21:41.849126] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:40:01.327 23:21:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:01.327 23:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:40:01.327 [2024-12-09 23:21:41.856889] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:40:02.260 23:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:02.260 23:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:02.260 23:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:02.260 23:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:02.260 23:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:02.260 23:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:02.260 23:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:02.260 23:21:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:02.260 23:21:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:02.260 23:21:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:02.518 23:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:02.518 "name": "raid_bdev1", 00:40:02.518 "uuid": "eb81e510-f4d9-4653-bfc2-ad70d88c8c06", 
00:40:02.518 "strip_size_kb": 64, 00:40:02.518 "state": "online", 00:40:02.518 "raid_level": "raid5f", 00:40:02.518 "superblock": false, 00:40:02.518 "num_base_bdevs": 3, 00:40:02.518 "num_base_bdevs_discovered": 3, 00:40:02.518 "num_base_bdevs_operational": 3, 00:40:02.518 "process": { 00:40:02.518 "type": "rebuild", 00:40:02.518 "target": "spare", 00:40:02.518 "progress": { 00:40:02.518 "blocks": 20480, 00:40:02.518 "percent": 15 00:40:02.518 } 00:40:02.518 }, 00:40:02.518 "base_bdevs_list": [ 00:40:02.518 { 00:40:02.518 "name": "spare", 00:40:02.518 "uuid": "be748f86-82f6-5a49-9bb5-b598bad73f27", 00:40:02.518 "is_configured": true, 00:40:02.518 "data_offset": 0, 00:40:02.518 "data_size": 65536 00:40:02.518 }, 00:40:02.518 { 00:40:02.518 "name": "BaseBdev2", 00:40:02.518 "uuid": "f74fed58-7277-5b5e-b3a5-41ee2e6c2ccd", 00:40:02.518 "is_configured": true, 00:40:02.518 "data_offset": 0, 00:40:02.518 "data_size": 65536 00:40:02.518 }, 00:40:02.518 { 00:40:02.518 "name": "BaseBdev3", 00:40:02.518 "uuid": "2143f57c-97f7-5ed9-92dc-ea5036eee143", 00:40:02.518 "is_configured": true, 00:40:02.518 "data_offset": 0, 00:40:02.518 "data_size": 65536 00:40:02.518 } 00:40:02.518 ] 00:40:02.518 }' 00:40:02.518 23:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:02.518 23:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:02.518 23:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:02.518 23:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:02.518 23:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:40:02.518 23:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:40:02.518 23:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:40:02.518 
23:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=552 00:40:02.518 23:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:40:02.518 23:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:02.518 23:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:02.518 23:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:02.518 23:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:02.518 23:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:02.518 23:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:02.518 23:21:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:02.518 23:21:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:02.518 23:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:02.518 23:21:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:02.518 23:21:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:02.518 "name": "raid_bdev1", 00:40:02.518 "uuid": "eb81e510-f4d9-4653-bfc2-ad70d88c8c06", 00:40:02.518 "strip_size_kb": 64, 00:40:02.518 "state": "online", 00:40:02.518 "raid_level": "raid5f", 00:40:02.518 "superblock": false, 00:40:02.518 "num_base_bdevs": 3, 00:40:02.518 "num_base_bdevs_discovered": 3, 00:40:02.518 "num_base_bdevs_operational": 3, 00:40:02.518 "process": { 00:40:02.518 "type": "rebuild", 00:40:02.518 "target": "spare", 00:40:02.518 "progress": { 00:40:02.518 "blocks": 22528, 00:40:02.518 "percent": 17 00:40:02.518 } 00:40:02.518 }, 00:40:02.518 "base_bdevs_list": [ 
00:40:02.518 { 00:40:02.518 "name": "spare", 00:40:02.518 "uuid": "be748f86-82f6-5a49-9bb5-b598bad73f27", 00:40:02.518 "is_configured": true, 00:40:02.518 "data_offset": 0, 00:40:02.518 "data_size": 65536 00:40:02.518 }, 00:40:02.518 { 00:40:02.518 "name": "BaseBdev2", 00:40:02.518 "uuid": "f74fed58-7277-5b5e-b3a5-41ee2e6c2ccd", 00:40:02.518 "is_configured": true, 00:40:02.518 "data_offset": 0, 00:40:02.518 "data_size": 65536 00:40:02.518 }, 00:40:02.518 { 00:40:02.518 "name": "BaseBdev3", 00:40:02.518 "uuid": "2143f57c-97f7-5ed9-92dc-ea5036eee143", 00:40:02.518 "is_configured": true, 00:40:02.518 "data_offset": 0, 00:40:02.518 "data_size": 65536 00:40:02.518 } 00:40:02.518 ] 00:40:02.518 }' 00:40:02.518 23:21:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:02.518 23:21:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:02.518 23:21:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:02.518 23:21:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:02.518 23:21:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:40:03.894 23:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:40:03.894 23:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:03.894 23:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:03.894 23:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:03.894 23:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:03.894 23:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:03.894 23:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:40:03.894 23:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:03.894 23:21:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:03.894 23:21:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:03.894 23:21:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:03.894 23:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:03.894 "name": "raid_bdev1", 00:40:03.894 "uuid": "eb81e510-f4d9-4653-bfc2-ad70d88c8c06", 00:40:03.894 "strip_size_kb": 64, 00:40:03.894 "state": "online", 00:40:03.894 "raid_level": "raid5f", 00:40:03.894 "superblock": false, 00:40:03.894 "num_base_bdevs": 3, 00:40:03.894 "num_base_bdevs_discovered": 3, 00:40:03.894 "num_base_bdevs_operational": 3, 00:40:03.894 "process": { 00:40:03.894 "type": "rebuild", 00:40:03.894 "target": "spare", 00:40:03.894 "progress": { 00:40:03.894 "blocks": 45056, 00:40:03.894 "percent": 34 00:40:03.894 } 00:40:03.894 }, 00:40:03.894 "base_bdevs_list": [ 00:40:03.894 { 00:40:03.894 "name": "spare", 00:40:03.894 "uuid": "be748f86-82f6-5a49-9bb5-b598bad73f27", 00:40:03.894 "is_configured": true, 00:40:03.894 "data_offset": 0, 00:40:03.894 "data_size": 65536 00:40:03.894 }, 00:40:03.894 { 00:40:03.894 "name": "BaseBdev2", 00:40:03.894 "uuid": "f74fed58-7277-5b5e-b3a5-41ee2e6c2ccd", 00:40:03.894 "is_configured": true, 00:40:03.894 "data_offset": 0, 00:40:03.894 "data_size": 65536 00:40:03.894 }, 00:40:03.894 { 00:40:03.895 "name": "BaseBdev3", 00:40:03.895 "uuid": "2143f57c-97f7-5ed9-92dc-ea5036eee143", 00:40:03.895 "is_configured": true, 00:40:03.895 "data_offset": 0, 00:40:03.895 "data_size": 65536 00:40:03.895 } 00:40:03.895 ] 00:40:03.895 }' 00:40:03.895 23:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:03.895 23:21:44 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:03.895 23:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:03.895 23:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:03.895 23:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:40:04.827 23:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:40:04.827 23:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:04.827 23:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:04.827 23:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:04.827 23:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:04.827 23:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:04.827 23:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:04.827 23:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:04.827 23:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:04.827 23:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:04.827 23:21:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:04.827 23:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:04.827 "name": "raid_bdev1", 00:40:04.827 "uuid": "eb81e510-f4d9-4653-bfc2-ad70d88c8c06", 00:40:04.827 "strip_size_kb": 64, 00:40:04.827 "state": "online", 00:40:04.828 "raid_level": "raid5f", 00:40:04.828 "superblock": false, 00:40:04.828 "num_base_bdevs": 3, 00:40:04.828 
"num_base_bdevs_discovered": 3, 00:40:04.828 "num_base_bdevs_operational": 3, 00:40:04.828 "process": { 00:40:04.828 "type": "rebuild", 00:40:04.828 "target": "spare", 00:40:04.828 "progress": { 00:40:04.828 "blocks": 67584, 00:40:04.828 "percent": 51 00:40:04.828 } 00:40:04.828 }, 00:40:04.828 "base_bdevs_list": [ 00:40:04.828 { 00:40:04.828 "name": "spare", 00:40:04.828 "uuid": "be748f86-82f6-5a49-9bb5-b598bad73f27", 00:40:04.828 "is_configured": true, 00:40:04.828 "data_offset": 0, 00:40:04.828 "data_size": 65536 00:40:04.828 }, 00:40:04.828 { 00:40:04.828 "name": "BaseBdev2", 00:40:04.828 "uuid": "f74fed58-7277-5b5e-b3a5-41ee2e6c2ccd", 00:40:04.828 "is_configured": true, 00:40:04.828 "data_offset": 0, 00:40:04.828 "data_size": 65536 00:40:04.828 }, 00:40:04.828 { 00:40:04.828 "name": "BaseBdev3", 00:40:04.828 "uuid": "2143f57c-97f7-5ed9-92dc-ea5036eee143", 00:40:04.828 "is_configured": true, 00:40:04.828 "data_offset": 0, 00:40:04.828 "data_size": 65536 00:40:04.828 } 00:40:04.828 ] 00:40:04.828 }' 00:40:04.828 23:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:04.828 23:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:04.828 23:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:04.828 23:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:04.828 23:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:40:05.760 23:21:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:40:05.760 23:21:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:05.760 23:21:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:05.760 23:21:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:40:05.760 23:21:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:05.760 23:21:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:05.760 23:21:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:05.760 23:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:05.760 23:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:05.761 23:21:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:06.018 23:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:06.018 23:21:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:06.018 "name": "raid_bdev1", 00:40:06.018 "uuid": "eb81e510-f4d9-4653-bfc2-ad70d88c8c06", 00:40:06.018 "strip_size_kb": 64, 00:40:06.018 "state": "online", 00:40:06.018 "raid_level": "raid5f", 00:40:06.018 "superblock": false, 00:40:06.018 "num_base_bdevs": 3, 00:40:06.018 "num_base_bdevs_discovered": 3, 00:40:06.018 "num_base_bdevs_operational": 3, 00:40:06.018 "process": { 00:40:06.018 "type": "rebuild", 00:40:06.018 "target": "spare", 00:40:06.018 "progress": { 00:40:06.018 "blocks": 90112, 00:40:06.018 "percent": 68 00:40:06.018 } 00:40:06.018 }, 00:40:06.018 "base_bdevs_list": [ 00:40:06.018 { 00:40:06.018 "name": "spare", 00:40:06.018 "uuid": "be748f86-82f6-5a49-9bb5-b598bad73f27", 00:40:06.018 "is_configured": true, 00:40:06.018 "data_offset": 0, 00:40:06.018 "data_size": 65536 00:40:06.018 }, 00:40:06.018 { 00:40:06.018 "name": "BaseBdev2", 00:40:06.018 "uuid": "f74fed58-7277-5b5e-b3a5-41ee2e6c2ccd", 00:40:06.018 "is_configured": true, 00:40:06.018 "data_offset": 0, 00:40:06.018 "data_size": 65536 00:40:06.018 }, 00:40:06.018 { 00:40:06.018 "name": "BaseBdev3", 00:40:06.019 "uuid": 
"2143f57c-97f7-5ed9-92dc-ea5036eee143", 00:40:06.019 "is_configured": true, 00:40:06.019 "data_offset": 0, 00:40:06.019 "data_size": 65536 00:40:06.019 } 00:40:06.019 ] 00:40:06.019 }' 00:40:06.019 23:21:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:06.019 23:21:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:06.019 23:21:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:06.019 23:21:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:06.019 23:21:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:40:06.952 23:21:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:40:06.952 23:21:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:06.952 23:21:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:06.952 23:21:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:06.952 23:21:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:06.952 23:21:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:06.952 23:21:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:06.952 23:21:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:06.952 23:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:06.952 23:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:06.952 23:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:06.952 23:21:47 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:06.952 "name": "raid_bdev1", 00:40:06.952 "uuid": "eb81e510-f4d9-4653-bfc2-ad70d88c8c06", 00:40:06.952 "strip_size_kb": 64, 00:40:06.952 "state": "online", 00:40:06.952 "raid_level": "raid5f", 00:40:06.952 "superblock": false, 00:40:06.952 "num_base_bdevs": 3, 00:40:06.952 "num_base_bdevs_discovered": 3, 00:40:06.952 "num_base_bdevs_operational": 3, 00:40:06.952 "process": { 00:40:06.952 "type": "rebuild", 00:40:06.952 "target": "spare", 00:40:06.952 "progress": { 00:40:06.952 "blocks": 114688, 00:40:06.952 "percent": 87 00:40:06.952 } 00:40:06.952 }, 00:40:06.952 "base_bdevs_list": [ 00:40:06.952 { 00:40:06.952 "name": "spare", 00:40:06.952 "uuid": "be748f86-82f6-5a49-9bb5-b598bad73f27", 00:40:06.952 "is_configured": true, 00:40:06.952 "data_offset": 0, 00:40:06.952 "data_size": 65536 00:40:06.952 }, 00:40:06.952 { 00:40:06.952 "name": "BaseBdev2", 00:40:06.952 "uuid": "f74fed58-7277-5b5e-b3a5-41ee2e6c2ccd", 00:40:06.952 "is_configured": true, 00:40:06.952 "data_offset": 0, 00:40:06.952 "data_size": 65536 00:40:06.952 }, 00:40:06.952 { 00:40:06.952 "name": "BaseBdev3", 00:40:06.952 "uuid": "2143f57c-97f7-5ed9-92dc-ea5036eee143", 00:40:06.952 "is_configured": true, 00:40:06.952 "data_offset": 0, 00:40:06.952 "data_size": 65536 00:40:06.952 } 00:40:06.952 ] 00:40:06.952 }' 00:40:06.952 23:21:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:07.210 23:21:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:07.210 23:21:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:07.210 23:21:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:07.210 23:21:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:40:07.775 [2024-12-09 23:21:48.313342] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:40:07.775 [2024-12-09 23:21:48.313464] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:40:07.775 [2024-12-09 23:21:48.313533] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:08.349 23:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:40:08.349 23:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:08.349 23:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:08.349 23:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:08.349 23:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:08.349 23:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:08.349 23:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:08.349 23:21:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:08.349 23:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:08.349 23:21:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:08.349 23:21:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:08.349 23:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:08.349 "name": "raid_bdev1", 00:40:08.349 "uuid": "eb81e510-f4d9-4653-bfc2-ad70d88c8c06", 00:40:08.349 "strip_size_kb": 64, 00:40:08.349 "state": "online", 00:40:08.349 "raid_level": "raid5f", 00:40:08.349 "superblock": false, 00:40:08.349 "num_base_bdevs": 3, 00:40:08.349 "num_base_bdevs_discovered": 3, 00:40:08.349 "num_base_bdevs_operational": 3, 00:40:08.349 "base_bdevs_list": [ 00:40:08.349 { 
00:40:08.349 "name": "spare", 00:40:08.349 "uuid": "be748f86-82f6-5a49-9bb5-b598bad73f27", 00:40:08.349 "is_configured": true, 00:40:08.349 "data_offset": 0, 00:40:08.349 "data_size": 65536 00:40:08.349 }, 00:40:08.349 { 00:40:08.349 "name": "BaseBdev2", 00:40:08.349 "uuid": "f74fed58-7277-5b5e-b3a5-41ee2e6c2ccd", 00:40:08.349 "is_configured": true, 00:40:08.349 "data_offset": 0, 00:40:08.349 "data_size": 65536 00:40:08.349 }, 00:40:08.349 { 00:40:08.349 "name": "BaseBdev3", 00:40:08.349 "uuid": "2143f57c-97f7-5ed9-92dc-ea5036eee143", 00:40:08.349 "is_configured": true, 00:40:08.349 "data_offset": 0, 00:40:08.349 "data_size": 65536 00:40:08.349 } 00:40:08.349 ] 00:40:08.349 }' 00:40:08.349 23:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:08.349 23:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:40:08.349 23:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:08.349 23:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:40:08.349 23:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:40:08.349 23:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:08.349 23:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:08.349 23:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:40:08.349 23:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:40:08.349 23:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:08.349 23:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:08.349 23:21:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:40:08.349 23:21:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:08.349 23:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:08.349 23:21:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:08.349 23:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:08.349 "name": "raid_bdev1", 00:40:08.349 "uuid": "eb81e510-f4d9-4653-bfc2-ad70d88c8c06", 00:40:08.349 "strip_size_kb": 64, 00:40:08.349 "state": "online", 00:40:08.349 "raid_level": "raid5f", 00:40:08.349 "superblock": false, 00:40:08.350 "num_base_bdevs": 3, 00:40:08.350 "num_base_bdevs_discovered": 3, 00:40:08.350 "num_base_bdevs_operational": 3, 00:40:08.350 "base_bdevs_list": [ 00:40:08.350 { 00:40:08.350 "name": "spare", 00:40:08.350 "uuid": "be748f86-82f6-5a49-9bb5-b598bad73f27", 00:40:08.350 "is_configured": true, 00:40:08.350 "data_offset": 0, 00:40:08.350 "data_size": 65536 00:40:08.350 }, 00:40:08.350 { 00:40:08.350 "name": "BaseBdev2", 00:40:08.350 "uuid": "f74fed58-7277-5b5e-b3a5-41ee2e6c2ccd", 00:40:08.350 "is_configured": true, 00:40:08.350 "data_offset": 0, 00:40:08.350 "data_size": 65536 00:40:08.350 }, 00:40:08.350 { 00:40:08.350 "name": "BaseBdev3", 00:40:08.350 "uuid": "2143f57c-97f7-5ed9-92dc-ea5036eee143", 00:40:08.350 "is_configured": true, 00:40:08.350 "data_offset": 0, 00:40:08.350 "data_size": 65536 00:40:08.350 } 00:40:08.350 ] 00:40:08.350 }' 00:40:08.350 23:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:08.350 23:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:40:08.350 23:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:08.350 23:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:40:08.350 23:21:48 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:40:08.350 23:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:08.350 23:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:08.350 23:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:08.350 23:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:08.350 23:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:40:08.350 23:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:08.350 23:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:08.350 23:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:08.350 23:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:08.350 23:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:08.350 23:21:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:08.350 23:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:08.350 23:21:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:08.350 23:21:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:08.607 23:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:08.607 "name": "raid_bdev1", 00:40:08.607 "uuid": "eb81e510-f4d9-4653-bfc2-ad70d88c8c06", 00:40:08.607 "strip_size_kb": 64, 00:40:08.607 "state": "online", 00:40:08.607 "raid_level": "raid5f", 00:40:08.607 "superblock": false, 00:40:08.607 "num_base_bdevs": 3, 00:40:08.607 
"num_base_bdevs_discovered": 3, 00:40:08.607 "num_base_bdevs_operational": 3, 00:40:08.607 "base_bdevs_list": [ 00:40:08.607 { 00:40:08.607 "name": "spare", 00:40:08.607 "uuid": "be748f86-82f6-5a49-9bb5-b598bad73f27", 00:40:08.607 "is_configured": true, 00:40:08.607 "data_offset": 0, 00:40:08.607 "data_size": 65536 00:40:08.607 }, 00:40:08.607 { 00:40:08.607 "name": "BaseBdev2", 00:40:08.607 "uuid": "f74fed58-7277-5b5e-b3a5-41ee2e6c2ccd", 00:40:08.607 "is_configured": true, 00:40:08.607 "data_offset": 0, 00:40:08.607 "data_size": 65536 00:40:08.607 }, 00:40:08.607 { 00:40:08.607 "name": "BaseBdev3", 00:40:08.607 "uuid": "2143f57c-97f7-5ed9-92dc-ea5036eee143", 00:40:08.607 "is_configured": true, 00:40:08.607 "data_offset": 0, 00:40:08.607 "data_size": 65536 00:40:08.607 } 00:40:08.607 ] 00:40:08.607 }' 00:40:08.607 23:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:08.607 23:21:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:08.865 23:21:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:40:08.865 23:21:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:08.865 23:21:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:08.865 [2024-12-09 23:21:49.392960] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:40:08.865 [2024-12-09 23:21:49.392996] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:40:08.865 [2024-12-09 23:21:49.393097] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:08.865 [2024-12-09 23:21:49.393192] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:40:08.865 [2024-12-09 23:21:49.393213] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 
00:40:08.865 23:21:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:08.865 23:21:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:08.865 23:21:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:40:08.865 23:21:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:08.865 23:21:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:08.865 23:21:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:08.865 23:21:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:40:08.865 23:21:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:40:08.865 23:21:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:40:08.865 23:21:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:40:08.865 23:21:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:40:08.865 23:21:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:40:08.865 23:21:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:40:08.865 23:21:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:40:08.865 23:21:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:40:08.865 23:21:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:40:08.865 23:21:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:40:08.865 23:21:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:40:08.865 23:21:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:40:09.123 /dev/nbd0 00:40:09.123 23:21:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:40:09.123 23:21:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:40:09.123 23:21:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:40:09.123 23:21:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:40:09.123 23:21:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:40:09.123 23:21:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:40:09.123 23:21:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:40:09.123 23:21:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:40:09.123 23:21:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:40:09.123 23:21:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:40:09.123 23:21:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:09.123 1+0 records in 00:40:09.123 1+0 records out 00:40:09.123 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000528203 s, 7.8 MB/s 00:40:09.123 23:21:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:09.123 23:21:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:40:09.123 23:21:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:09.123 23:21:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:40:09.123 
23:21:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:40:09.123 23:21:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:09.123 23:21:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:40:09.123 23:21:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:40:09.381 /dev/nbd1 00:40:09.381 23:21:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:40:09.382 23:21:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:40:09.382 23:21:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:40:09.382 23:21:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:40:09.382 23:21:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:40:09.382 23:21:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:40:09.382 23:21:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:40:09.382 23:21:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:40:09.382 23:21:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:40:09.382 23:21:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:40:09.382 23:21:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:09.382 1+0 records in 00:40:09.382 1+0 records out 00:40:09.382 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421978 s, 9.7 MB/s 00:40:09.382 23:21:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:09.382 23:21:49 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:40:09.382 23:21:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:09.382 23:21:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:40:09.382 23:21:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:40:09.382 23:21:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:09.382 23:21:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:40:09.382 23:21:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:40:09.640 23:21:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:40:09.640 23:21:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:40:09.640 23:21:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:40:09.640 23:21:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:40:09.640 23:21:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:40:09.640 23:21:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:09.640 23:21:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:40:09.897 23:21:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:40:09.898 23:21:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:09.898 23:21:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:09.898 23:21:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:09.898 23:21:50 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:09.898 23:21:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:09.898 23:21:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:40:09.898 23:21:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:40:09.898 23:21:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:09.898 23:21:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:40:10.156 23:21:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:40:10.156 23:21:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:40:10.156 23:21:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:40:10.156 23:21:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:10.156 23:21:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:10.156 23:21:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:40:10.156 23:21:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:40:10.156 23:21:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:40:10.156 23:21:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:40:10.156 23:21:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81455 00:40:10.156 23:21:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81455 ']' 00:40:10.156 23:21:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81455 00:40:10.156 23:21:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:40:10.156 23:21:50 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:10.156 23:21:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81455 00:40:10.156 23:21:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:10.156 killing process with pid 81455 00:40:10.156 Received shutdown signal, test time was about 60.000000 seconds 00:40:10.156 00:40:10.156 Latency(us) 00:40:10.156 [2024-12-09T23:21:50.792Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:10.156 [2024-12-09T23:21:50.792Z] =================================================================================================================== 00:40:10.156 [2024-12-09T23:21:50.792Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:40:10.156 23:21:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:10.156 23:21:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81455' 00:40:10.156 23:21:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81455 00:40:10.156 [2024-12-09 23:21:50.686387] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:40:10.156 23:21:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81455 00:40:10.721 [2024-12-09 23:21:51.119781] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:40:12.097 00:40:12.097 real 0m15.427s 00:40:12.097 user 0m18.505s 00:40:12.097 sys 0m2.406s 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:12.097 ************************************ 00:40:12.097 END TEST raid5f_rebuild_test 00:40:12.097 ************************************ 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:40:12.097 23:21:52 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:40:12.097 23:21:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:40:12.097 23:21:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:12.097 23:21:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:40:12.097 ************************************ 00:40:12.097 START TEST raid5f_rebuild_test_sb 00:40:12.097 ************************************ 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=81896 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T 
raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 81896 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81896 ']' 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:12.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:12.097 23:21:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:12.097 I/O size of 3145728 is greater than zero copy threshold (65536). 00:40:12.097 Zero copy mechanism will not be used. 00:40:12.097 [2024-12-09 23:21:52.541659] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:40:12.097 [2024-12-09 23:21:52.541815] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81896 ]
00:40:12.097 [2024-12-09 23:21:52.723986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:40:12.365 [2024-12-09 23:21:52.850623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:40:12.632 [2024-12-09 23:21:53.076599] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:40:12.632 [2024-12-09 23:21:53.076670] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:40:12.890 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:40:12.890 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0
00:40:12.890 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:40:12.890 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:40:12.890 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:12.890 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:40:12.890 BaseBdev1_malloc
00:40:12.890 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:12.891 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:40:12.891 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:12.891 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:40:12.891 [2024-12-09 23:21:53.464158] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:40:12.891 [2024-12-09 23:21:53.464226] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:40:12.891 [2024-12-09 23:21:53.464250] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:40:12.891 [2024-12-09 23:21:53.464265] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:40:12.891 [2024-12-09 23:21:53.466828] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:40:12.891 [2024-12-09 23:21:53.466997] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:40:12.891 BaseBdev1
00:40:12.891 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:12.891 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:40:12.891 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:40:12.891 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:12.891 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:40:12.891 BaseBdev2_malloc
00:40:12.891 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:12.891 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:40:12.891 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:12.891 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:40:13.150 [2024-12-09 23:21:53.528794] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:40:13.150 [2024-12-09 23:21:53.528995] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:40:13.150 [2024-12-09 23:21:53.529055] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:40:13.150 [2024-12-09 23:21:53.529302] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:40:13.150 [2024-12-09 23:21:53.532029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:40:13.150 [2024-12-09 23:21:53.532114] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:40:13.150 BaseBdev2
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:40:13.150 BaseBdev3_malloc
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:40:13.150 [2024-12-09 23:21:53.607838] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:40:13.150 [2024-12-09 23:21:53.608010] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:40:13.150 [2024-12-09 23:21:53.608045] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:40:13.150 [2024-12-09 23:21:53.608061] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:40:13.150 [2024-12-09 23:21:53.610600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:40:13.150 [2024-12-09 23:21:53.610645] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:40:13.150 BaseBdev3
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:40:13.150 spare_malloc
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:40:13.150 spare_delay
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:40:13.150 [2024-12-09 23:21:53.680126] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:40:13.150 [2024-12-09 23:21:53.680192] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:40:13.150 [2024-12-09 23:21:53.680218] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:40:13.150 [2024-12-09 23:21:53.680234] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:40:13.150 [2024-12-09 23:21:53.682714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:40:13.150 [2024-12-09 23:21:53.682908] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:40:13.150 spare
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:40:13.150 [2024-12-09 23:21:53.692186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:40:13.150 [2024-12-09 23:21:53.694377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:40:13.150 [2024-12-09 23:21:53.694609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:40:13.150 [2024-12-09 23:21:53.694806] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:40:13.150 [2024-12-09 23:21:53.694821] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:40:13.150 [2024-12-09 23:21:53.695090] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:40:13.150 [2024-12-09 23:21:53.701839] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:40:13.150 [2024-12-09 23:21:53.701972] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:40:13.150 [2024-12-09 23:21:53.702278] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:13.150 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:40:13.150 "name": "raid_bdev1",
00:40:13.150 "uuid": "f1de724a-f67b-49a9-88c8-d60449a413bc",
00:40:13.150 "strip_size_kb": 64,
00:40:13.150 "state": "online",
00:40:13.150 "raid_level": "raid5f",
00:40:13.150 "superblock": true,
00:40:13.150 "num_base_bdevs": 3,
00:40:13.150 "num_base_bdevs_discovered": 3,
00:40:13.150 "num_base_bdevs_operational": 3,
00:40:13.150 "base_bdevs_list": [
00:40:13.150 {
00:40:13.150 "name": "BaseBdev1",
00:40:13.150 "uuid": "53f4dfc0-3162-51a5-ba38-55fcfd7029f6",
00:40:13.150 "is_configured": true,
00:40:13.151 "data_offset": 2048,
00:40:13.151 "data_size": 63488
00:40:13.151 },
00:40:13.151 {
00:40:13.151 "name": "BaseBdev2",
00:40:13.151 "uuid": "0644bc9d-3e20-5402-9bd9-b10037c22a16",
00:40:13.151 "is_configured": true,
00:40:13.151 "data_offset": 2048,
00:40:13.151 "data_size": 63488
00:40:13.151 },
00:40:13.151 {
00:40:13.151 "name": "BaseBdev3",
00:40:13.151 "uuid": "5a206502-212b-57e1-8881-0322af769232",
00:40:13.151 "is_configured": true,
00:40:13.151 "data_offset": 2048,
00:40:13.151 "data_size": 63488
00:40:13.151 }
00:40:13.151 ]
00:40:13.151 }'
00:40:13.151 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:40:13.151 23:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:40:13.717 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:40:13.717 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:13.717 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:40:13.717 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:40:13.717 [2024-12-09 23:21:54.184727] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:40:13.717 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:13.717 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976
00:40:13.717 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:40:13.717 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:13.717 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:40:13.717 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:40:13.717 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:13.717 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048
00:40:13.717 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:40:13.717 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:40:13.717 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:40:13.717 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:40:13.717 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:40:13.717 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:40:13.717 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list
00:40:13.717 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:40:13.717 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list
00:40:13.717 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i
00:40:13.717 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:40:13.717 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:40:13.717 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:40:13.975 [2024-12-09 23:21:54.528079] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:40:13.975 /dev/nbd0
00:40:13.976 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:40:13.976 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:40:13.976 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:40:13.976 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i
00:40:13.976 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:40:13.976 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:40:13.976 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:40:13.976 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break
00:40:13.976 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:40:13.976 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:40:13.976 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:40:13.976 1+0 records in
00:40:13.976 1+0 records out
00:40:13.976 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392098 s, 10.4 MB/s
00:40:13.976 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:40:13.976 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096
00:40:13.976 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:40:14.233 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:40:14.233 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0
00:40:14.233 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:40:14.233 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:40:14.233 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']'
00:40:14.233 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256
00:40:14.233 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128
00:40:14.233 23:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct
00:40:14.492 496+0 records in
00:40:14.492 496+0 records out
00:40:14.492 65011712 bytes (65 MB, 62 MiB) copied, 0.406435 s, 160 MB/s
00:40:14.492 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:40:14.492 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:40:14.492 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:40:14.492 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list
00:40:14.492 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i
00:40:14.492 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:40:14.492 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:40:14.750 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:40:14.750 [2024-12-09 23:21:55.256881] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:40:14.750 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:40:14.750 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:40:14.750 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:40:14.750 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:40:14.750 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:40:14.750 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:40:14.750 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:40:14.750 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:40:14.750 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:14.750 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:40:14.750 [2024-12-09 23:21:55.283991] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:40:14.750 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:14.750 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:40:14.750 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:40:14.750 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:40:14.750 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:40:14.750 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:40:14.750 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:40:14.750 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:40:14.750 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:40:14.750 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:40:14.750 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:40:14.750 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:40:14.750 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:40:14.750 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:14.750 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:40:14.750 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:14.750 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:40:14.750 "name": "raid_bdev1",
00:40:14.750 "uuid": "f1de724a-f67b-49a9-88c8-d60449a413bc",
00:40:14.750 "strip_size_kb": 64,
00:40:14.750 "state": "online",
00:40:14.750 "raid_level": "raid5f",
00:40:14.750 "superblock": true,
00:40:14.750 "num_base_bdevs": 3,
00:40:14.750 "num_base_bdevs_discovered": 2,
00:40:14.750 "num_base_bdevs_operational": 2,
00:40:14.750 "base_bdevs_list": [
00:40:14.750 {
00:40:14.750 "name": null,
00:40:14.750 "uuid": "00000000-0000-0000-0000-000000000000",
00:40:14.750 "is_configured": false,
00:40:14.751 "data_offset": 0,
00:40:14.751 "data_size": 63488
00:40:14.751 },
00:40:14.751 {
00:40:14.751 "name": "BaseBdev2",
00:40:14.751 "uuid": "0644bc9d-3e20-5402-9bd9-b10037c22a16",
00:40:14.751 "is_configured": true,
00:40:14.751 "data_offset": 2048,
00:40:14.751 "data_size": 63488
00:40:14.751 },
00:40:14.751 {
00:40:14.751 "name": "BaseBdev3",
00:40:14.751 "uuid": "5a206502-212b-57e1-8881-0322af769232",
00:40:14.751 "is_configured": true,
00:40:14.751 "data_offset": 2048,
00:40:14.751 "data_size": 63488
00:40:14.751 }
00:40:14.751 ]
00:40:14.751 }'
00:40:14.751 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:40:14.751 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:40:15.319 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:40:15.319 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:15.319 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:40:15.319 [2024-12-09 23:21:55.719464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:40:15.319 [2024-12-09 23:21:55.737883] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80
00:40:15.319 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:15.319 23:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1
00:40:15.319 [2024-12-09 23:21:55.746488] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:40:16.254 23:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:40:16.254 23:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:40:16.254 23:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:40:16.254 23:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:40:16.254 23:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:40:16.254 23:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:40:16.254 23:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:40:16.254 23:21:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:16.254 23:21:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:40:16.254 23:21:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:16.254 23:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:40:16.254 "name": "raid_bdev1",
00:40:16.254 "uuid": "f1de724a-f67b-49a9-88c8-d60449a413bc",
00:40:16.254 "strip_size_kb": 64,
00:40:16.254 "state": "online",
00:40:16.254 "raid_level": "raid5f",
00:40:16.254 "superblock": true,
00:40:16.254 "num_base_bdevs": 3,
00:40:16.254 "num_base_bdevs_discovered": 3,
00:40:16.254 "num_base_bdevs_operational": 3,
00:40:16.254 "process": {
00:40:16.254 "type": "rebuild",
00:40:16.254 "target": "spare",
00:40:16.254 "progress": {
00:40:16.254 "blocks": 18432,
00:40:16.254 "percent": 14
00:40:16.254 }
00:40:16.254 },
00:40:16.254 "base_bdevs_list": [
00:40:16.254 {
00:40:16.254 "name": "spare",
00:40:16.254 "uuid": "f6d281b5-09db-568d-9e75-8e15b9794fb4",
00:40:16.254 "is_configured": true,
00:40:16.254 "data_offset": 2048,
00:40:16.254 "data_size": 63488
00:40:16.254 },
00:40:16.254 {
00:40:16.254 "name": "BaseBdev2",
00:40:16.254 "uuid": "0644bc9d-3e20-5402-9bd9-b10037c22a16",
00:40:16.254 "is_configured": true,
00:40:16.254 "data_offset": 2048,
00:40:16.254 "data_size": 63488
00:40:16.254 },
00:40:16.254 {
00:40:16.254 "name": "BaseBdev3",
00:40:16.254 "uuid": "5a206502-212b-57e1-8881-0322af769232",
00:40:16.254 "is_configured": true,
00:40:16.254 "data_offset": 2048,
00:40:16.254 "data_size": 63488
00:40:16.254 }
00:40:16.254 ]
00:40:16.254 }'
00:40:16.254 23:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:40:16.254 23:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:40:16.254 23:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:40:16.254 23:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:40:16.254 23:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:40:16.254 23:21:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:16.254 23:21:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:40:16.254 [2024-12-09 23:21:56.882584] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:40:16.512 [2024-12-09 23:21:56.956638] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:40:16.512 [2024-12-09 23:21:56.956709] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:40:16.512 [2024-12-09 23:21:56.956732] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:40:16.512 [2024-12-09 23:21:56.956742] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:40:16.512 23:21:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:16.512 23:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:40:16.512 23:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:40:16.512 23:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:40:16.512 23:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:40:16.512 23:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:40:16.512 23:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:40:16.512 23:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:40:16.512 23:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:40:16.512 23:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:40:16.512 23:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:40:16.512 23:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:40:16.512 23:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:40:16.512 23:21:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:16.512 23:21:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:40:16.513 23:21:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:16.513 23:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:40:16.513 "name": "raid_bdev1",
00:40:16.513 "uuid": "f1de724a-f67b-49a9-88c8-d60449a413bc",
00:40:16.513 "strip_size_kb": 64,
00:40:16.513 "state": "online",
00:40:16.513 "raid_level": "raid5f",
00:40:16.513 "superblock": true,
00:40:16.513 "num_base_bdevs": 3,
00:40:16.513 "num_base_bdevs_discovered": 2,
00:40:16.513 "num_base_bdevs_operational": 2,
00:40:16.513 "base_bdevs_list": [
00:40:16.513 {
00:40:16.513 "name": null,
00:40:16.513 "uuid": "00000000-0000-0000-0000-000000000000",
00:40:16.513 "is_configured": false,
00:40:16.513 "data_offset": 0,
00:40:16.513 "data_size": 63488
00:40:16.513 },
00:40:16.513 {
00:40:16.513 "name": "BaseBdev2",
00:40:16.513 "uuid": "0644bc9d-3e20-5402-9bd9-b10037c22a16",
00:40:16.513 "is_configured": true,
00:40:16.513 "data_offset": 2048,
00:40:16.513 "data_size": 63488
00:40:16.513 },
00:40:16.513 {
00:40:16.513 "name": "BaseBdev3",
00:40:16.513 "uuid": "5a206502-212b-57e1-8881-0322af769232",
00:40:16.513 "is_configured": true,
00:40:16.513 "data_offset": 2048,
00:40:16.513 "data_size": 63488
00:40:16.513 }
00:40:16.513 ]
00:40:16.513 }'
00:40:16.513 23:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:40:16.513 23:21:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:40:17.159 23:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:40:17.159 23:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:40:17.159 23:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:40:17.159 23:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:40:17.159 23:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:40:17.159 23:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:40:17.159 23:21:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:17.159 23:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:40:17.160 23:21:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:40:17.160 23:21:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:17.160 23:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:40:17.160 "name": "raid_bdev1",
00:40:17.160 "uuid": "f1de724a-f67b-49a9-88c8-d60449a413bc",
00:40:17.160 "strip_size_kb": 64,
00:40:17.160 "state": "online",
00:40:17.160 "raid_level": "raid5f",
00:40:17.160 "superblock": true,
00:40:17.160 "num_base_bdevs": 3,
00:40:17.160 "num_base_bdevs_discovered": 2,
00:40:17.160 "num_base_bdevs_operational": 2,
00:40:17.160 "base_bdevs_list": [
00:40:17.160 {
00:40:17.160 "name": null,
00:40:17.160 "uuid": "00000000-0000-0000-0000-000000000000",
00:40:17.160 "is_configured": false,
00:40:17.160 "data_offset": 0,
00:40:17.160 "data_size": 63488
00:40:17.160 },
00:40:17.160 {
00:40:17.160 "name": "BaseBdev2",
00:40:17.160 "uuid": "0644bc9d-3e20-5402-9bd9-b10037c22a16",
00:40:17.160 "is_configured": true,
00:40:17.160 "data_offset": 2048,
00:40:17.160 "data_size": 63488
00:40:17.160 },
00:40:17.160 {
00:40:17.160 "name": "BaseBdev3",
00:40:17.160 "uuid": "5a206502-212b-57e1-8881-0322af769232",
00:40:17.160 "is_configured": true,
00:40:17.160 "data_offset": 2048,
00:40:17.160 "data_size": 63488
00:40:17.160 }
00:40:17.160 ]
00:40:17.160 }'
00:40:17.160 23:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:40:17.160 23:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:40:17.160 23:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:40:17.160 23:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:40:17.160 23:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:40:17.160 23:21:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:17.160 23:21:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:40:17.160 [2024-12-09 23:21:57.576654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:40:17.160 [2024-12-09 23:21:57.593139] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050
00:40:17.160 23:21:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:17.160 23:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1
00:40:17.160 [2024-12-09 23:21:57.601284] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:40:18.095 23:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:40:18.095 23:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:40:18.095 23:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:40:18.095 23:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:40:18.095 23:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:40:18.095 23:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:40:18.095 23:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:40:18.095 23:21:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:18.095 23:21:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:40:18.095 23:21:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:18.095 23:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:40:18.095 "name": "raid_bdev1",
00:40:18.095 "uuid": "f1de724a-f67b-49a9-88c8-d60449a413bc",
00:40:18.095 "strip_size_kb": 64,
00:40:18.095 "state": "online",
00:40:18.095 "raid_level": "raid5f",
00:40:18.095 "superblock": true,
00:40:18.095 "num_base_bdevs": 3,
00:40:18.095 "num_base_bdevs_discovered": 3,
00:40:18.095 "num_base_bdevs_operational": 3,
00:40:18.095 "process": {
00:40:18.095 "type": "rebuild",
00:40:18.095 "target": "spare",
00:40:18.095 "progress": {
00:40:18.095 "blocks": 20480,
00:40:18.095 "percent": 16
00:40:18.095 }
00:40:18.095 },
00:40:18.095 "base_bdevs_list": [
00:40:18.095 {
00:40:18.095 "name": "spare",
00:40:18.095 "uuid": "f6d281b5-09db-568d-9e75-8e15b9794fb4",
00:40:18.095 "is_configured": true,
00:40:18.095 "data_offset": 2048,
00:40:18.095 "data_size": 63488
00:40:18.095 },
00:40:18.095 {
00:40:18.095 "name": "BaseBdev2",
00:40:18.095 "uuid": "0644bc9d-3e20-5402-9bd9-b10037c22a16",
00:40:18.095 "is_configured": true,
00:40:18.095 "data_offset": 2048,
00:40:18.095 "data_size": 63488
00:40:18.095 },
00:40:18.095 {
00:40:18.095 "name": "BaseBdev3",
00:40:18.095 "uuid": "5a206502-212b-57e1-8881-0322af769232",
00:40:18.095 "is_configured": true,
00:40:18.095 "data_offset": 2048,
00:40:18.095 "data_size": 63488
00:40:18.095 }
00:40:18.095 ]
00:40:18.095 }'
00:40:18.095 23:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:40:18.095 23:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:40:18.095 23:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:40:18.353 23:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:40:18.353 23:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:40:18.353 23:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
00:40:18.353 23:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691
-- # local num_base_bdevs_operational=3 00:40:18.353 23:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:40:18.353 23:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=568 00:40:18.353 23:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:40:18.353 23:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:18.353 23:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:18.353 23:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:18.353 23:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:18.353 23:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:18.353 23:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:18.353 23:21:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:18.353 23:21:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:18.353 23:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:18.353 23:21:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:18.353 23:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:18.353 "name": "raid_bdev1", 00:40:18.353 "uuid": "f1de724a-f67b-49a9-88c8-d60449a413bc", 00:40:18.353 "strip_size_kb": 64, 00:40:18.353 "state": "online", 00:40:18.353 "raid_level": "raid5f", 00:40:18.353 "superblock": true, 00:40:18.353 "num_base_bdevs": 3, 00:40:18.353 "num_base_bdevs_discovered": 3, 00:40:18.353 "num_base_bdevs_operational": 3, 00:40:18.353 "process": { 00:40:18.353 "type": 
"rebuild", 00:40:18.353 "target": "spare", 00:40:18.353 "progress": { 00:40:18.353 "blocks": 22528, 00:40:18.353 "percent": 17 00:40:18.353 } 00:40:18.353 }, 00:40:18.353 "base_bdevs_list": [ 00:40:18.353 { 00:40:18.353 "name": "spare", 00:40:18.353 "uuid": "f6d281b5-09db-568d-9e75-8e15b9794fb4", 00:40:18.353 "is_configured": true, 00:40:18.353 "data_offset": 2048, 00:40:18.353 "data_size": 63488 00:40:18.353 }, 00:40:18.353 { 00:40:18.353 "name": "BaseBdev2", 00:40:18.353 "uuid": "0644bc9d-3e20-5402-9bd9-b10037c22a16", 00:40:18.353 "is_configured": true, 00:40:18.353 "data_offset": 2048, 00:40:18.353 "data_size": 63488 00:40:18.353 }, 00:40:18.353 { 00:40:18.353 "name": "BaseBdev3", 00:40:18.353 "uuid": "5a206502-212b-57e1-8881-0322af769232", 00:40:18.353 "is_configured": true, 00:40:18.353 "data_offset": 2048, 00:40:18.353 "data_size": 63488 00:40:18.353 } 00:40:18.353 ] 00:40:18.353 }' 00:40:18.353 23:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:18.353 23:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:18.353 23:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:18.353 23:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:18.353 23:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:40:19.289 23:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:40:19.289 23:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:19.289 23:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:19.289 23:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:19.289 23:21:59 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:40:19.289 23:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:19.289 23:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:19.289 23:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:19.289 23:21:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:19.289 23:21:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:19.289 23:21:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:19.548 23:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:19.548 "name": "raid_bdev1", 00:40:19.548 "uuid": "f1de724a-f67b-49a9-88c8-d60449a413bc", 00:40:19.548 "strip_size_kb": 64, 00:40:19.548 "state": "online", 00:40:19.548 "raid_level": "raid5f", 00:40:19.548 "superblock": true, 00:40:19.548 "num_base_bdevs": 3, 00:40:19.548 "num_base_bdevs_discovered": 3, 00:40:19.548 "num_base_bdevs_operational": 3, 00:40:19.548 "process": { 00:40:19.548 "type": "rebuild", 00:40:19.548 "target": "spare", 00:40:19.548 "progress": { 00:40:19.548 "blocks": 45056, 00:40:19.548 "percent": 35 00:40:19.548 } 00:40:19.548 }, 00:40:19.548 "base_bdevs_list": [ 00:40:19.548 { 00:40:19.548 "name": "spare", 00:40:19.548 "uuid": "f6d281b5-09db-568d-9e75-8e15b9794fb4", 00:40:19.548 "is_configured": true, 00:40:19.548 "data_offset": 2048, 00:40:19.548 "data_size": 63488 00:40:19.548 }, 00:40:19.548 { 00:40:19.548 "name": "BaseBdev2", 00:40:19.548 "uuid": "0644bc9d-3e20-5402-9bd9-b10037c22a16", 00:40:19.548 "is_configured": true, 00:40:19.548 "data_offset": 2048, 00:40:19.548 "data_size": 63488 00:40:19.548 }, 00:40:19.548 { 00:40:19.548 "name": "BaseBdev3", 00:40:19.548 "uuid": "5a206502-212b-57e1-8881-0322af769232", 00:40:19.548 
"is_configured": true, 00:40:19.548 "data_offset": 2048, 00:40:19.548 "data_size": 63488 00:40:19.548 } 00:40:19.548 ] 00:40:19.548 }' 00:40:19.548 23:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:19.548 23:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:19.548 23:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:19.548 23:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:19.548 23:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:40:20.484 23:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:40:20.484 23:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:20.485 23:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:20.485 23:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:20.485 23:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:20.485 23:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:20.485 23:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:20.485 23:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:20.485 23:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:20.485 23:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:20.485 23:22:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:20.485 23:22:01 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:20.485 "name": "raid_bdev1", 00:40:20.485 "uuid": "f1de724a-f67b-49a9-88c8-d60449a413bc", 00:40:20.485 "strip_size_kb": 64, 00:40:20.485 "state": "online", 00:40:20.485 "raid_level": "raid5f", 00:40:20.485 "superblock": true, 00:40:20.485 "num_base_bdevs": 3, 00:40:20.485 "num_base_bdevs_discovered": 3, 00:40:20.485 "num_base_bdevs_operational": 3, 00:40:20.485 "process": { 00:40:20.485 "type": "rebuild", 00:40:20.485 "target": "spare", 00:40:20.485 "progress": { 00:40:20.485 "blocks": 69632, 00:40:20.485 "percent": 54 00:40:20.485 } 00:40:20.485 }, 00:40:20.485 "base_bdevs_list": [ 00:40:20.485 { 00:40:20.485 "name": "spare", 00:40:20.485 "uuid": "f6d281b5-09db-568d-9e75-8e15b9794fb4", 00:40:20.485 "is_configured": true, 00:40:20.485 "data_offset": 2048, 00:40:20.485 "data_size": 63488 00:40:20.485 }, 00:40:20.485 { 00:40:20.485 "name": "BaseBdev2", 00:40:20.485 "uuid": "0644bc9d-3e20-5402-9bd9-b10037c22a16", 00:40:20.485 "is_configured": true, 00:40:20.485 "data_offset": 2048, 00:40:20.485 "data_size": 63488 00:40:20.485 }, 00:40:20.485 { 00:40:20.485 "name": "BaseBdev3", 00:40:20.485 "uuid": "5a206502-212b-57e1-8881-0322af769232", 00:40:20.485 "is_configured": true, 00:40:20.485 "data_offset": 2048, 00:40:20.485 "data_size": 63488 00:40:20.485 } 00:40:20.485 ] 00:40:20.485 }' 00:40:20.485 23:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:20.743 23:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:20.743 23:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:20.743 23:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:20.743 23:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:40:21.708 23:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:40:21.708 23:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:21.708 23:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:21.708 23:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:21.708 23:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:21.708 23:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:21.709 23:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:21.709 23:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:21.709 23:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:21.709 23:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:21.709 23:22:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:21.709 23:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:21.709 "name": "raid_bdev1", 00:40:21.709 "uuid": "f1de724a-f67b-49a9-88c8-d60449a413bc", 00:40:21.709 "strip_size_kb": 64, 00:40:21.709 "state": "online", 00:40:21.709 "raid_level": "raid5f", 00:40:21.709 "superblock": true, 00:40:21.709 "num_base_bdevs": 3, 00:40:21.709 "num_base_bdevs_discovered": 3, 00:40:21.709 "num_base_bdevs_operational": 3, 00:40:21.709 "process": { 00:40:21.709 "type": "rebuild", 00:40:21.709 "target": "spare", 00:40:21.709 "progress": { 00:40:21.709 "blocks": 92160, 00:40:21.709 "percent": 72 00:40:21.709 } 00:40:21.709 }, 00:40:21.709 "base_bdevs_list": [ 00:40:21.709 { 00:40:21.709 "name": "spare", 00:40:21.709 "uuid": "f6d281b5-09db-568d-9e75-8e15b9794fb4", 00:40:21.709 "is_configured": true, 
00:40:21.709 "data_offset": 2048, 00:40:21.709 "data_size": 63488 00:40:21.709 }, 00:40:21.709 { 00:40:21.709 "name": "BaseBdev2", 00:40:21.709 "uuid": "0644bc9d-3e20-5402-9bd9-b10037c22a16", 00:40:21.709 "is_configured": true, 00:40:21.709 "data_offset": 2048, 00:40:21.709 "data_size": 63488 00:40:21.709 }, 00:40:21.709 { 00:40:21.709 "name": "BaseBdev3", 00:40:21.709 "uuid": "5a206502-212b-57e1-8881-0322af769232", 00:40:21.709 "is_configured": true, 00:40:21.709 "data_offset": 2048, 00:40:21.709 "data_size": 63488 00:40:21.709 } 00:40:21.709 ] 00:40:21.709 }' 00:40:21.709 23:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:21.709 23:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:21.709 23:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:21.709 23:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:21.709 23:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:40:23.108 23:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:40:23.108 23:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:23.108 23:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:23.108 23:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:23.108 23:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:23.108 23:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:23.108 23:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:23.108 23:22:03 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:40:23.108 23:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:23.108 23:22:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:23.108 23:22:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:23.108 23:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:23.108 "name": "raid_bdev1", 00:40:23.108 "uuid": "f1de724a-f67b-49a9-88c8-d60449a413bc", 00:40:23.108 "strip_size_kb": 64, 00:40:23.108 "state": "online", 00:40:23.108 "raid_level": "raid5f", 00:40:23.108 "superblock": true, 00:40:23.108 "num_base_bdevs": 3, 00:40:23.108 "num_base_bdevs_discovered": 3, 00:40:23.108 "num_base_bdevs_operational": 3, 00:40:23.108 "process": { 00:40:23.108 "type": "rebuild", 00:40:23.108 "target": "spare", 00:40:23.108 "progress": { 00:40:23.108 "blocks": 114688, 00:40:23.108 "percent": 90 00:40:23.108 } 00:40:23.108 }, 00:40:23.108 "base_bdevs_list": [ 00:40:23.108 { 00:40:23.108 "name": "spare", 00:40:23.108 "uuid": "f6d281b5-09db-568d-9e75-8e15b9794fb4", 00:40:23.108 "is_configured": true, 00:40:23.108 "data_offset": 2048, 00:40:23.108 "data_size": 63488 00:40:23.108 }, 00:40:23.108 { 00:40:23.108 "name": "BaseBdev2", 00:40:23.108 "uuid": "0644bc9d-3e20-5402-9bd9-b10037c22a16", 00:40:23.108 "is_configured": true, 00:40:23.108 "data_offset": 2048, 00:40:23.108 "data_size": 63488 00:40:23.108 }, 00:40:23.108 { 00:40:23.108 "name": "BaseBdev3", 00:40:23.108 "uuid": "5a206502-212b-57e1-8881-0322af769232", 00:40:23.108 "is_configured": true, 00:40:23.108 "data_offset": 2048, 00:40:23.108 "data_size": 63488 00:40:23.108 } 00:40:23.108 ] 00:40:23.108 }' 00:40:23.108 23:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:23.108 23:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:40:23.108 23:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:23.108 23:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:23.108 23:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:40:23.367 [2024-12-09 23:22:03.853381] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:40:23.367 [2024-12-09 23:22:03.853466] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:40:23.367 [2024-12-09 23:22:03.853577] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:23.935 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:40:23.935 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:23.935 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:23.935 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:23.935 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:23.935 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:23.935 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:23.935 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:23.935 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:23.935 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:23.935 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:23.935 23:22:04 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:23.935 "name": "raid_bdev1", 00:40:23.935 "uuid": "f1de724a-f67b-49a9-88c8-d60449a413bc", 00:40:23.935 "strip_size_kb": 64, 00:40:23.935 "state": "online", 00:40:23.935 "raid_level": "raid5f", 00:40:23.935 "superblock": true, 00:40:23.935 "num_base_bdevs": 3, 00:40:23.935 "num_base_bdevs_discovered": 3, 00:40:23.935 "num_base_bdevs_operational": 3, 00:40:23.935 "base_bdevs_list": [ 00:40:23.935 { 00:40:23.935 "name": "spare", 00:40:23.935 "uuid": "f6d281b5-09db-568d-9e75-8e15b9794fb4", 00:40:23.935 "is_configured": true, 00:40:23.935 "data_offset": 2048, 00:40:23.935 "data_size": 63488 00:40:23.935 }, 00:40:23.935 { 00:40:23.935 "name": "BaseBdev2", 00:40:23.935 "uuid": "0644bc9d-3e20-5402-9bd9-b10037c22a16", 00:40:23.935 "is_configured": true, 00:40:23.935 "data_offset": 2048, 00:40:23.935 "data_size": 63488 00:40:23.935 }, 00:40:23.935 { 00:40:23.935 "name": "BaseBdev3", 00:40:23.935 "uuid": "5a206502-212b-57e1-8881-0322af769232", 00:40:23.935 "is_configured": true, 00:40:23.935 "data_offset": 2048, 00:40:23.935 "data_size": 63488 00:40:23.935 } 00:40:23.935 ] 00:40:23.935 }' 00:40:23.935 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:23.935 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:40:23.935 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:24.194 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:40:24.194 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:40:24.194 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:24.194 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:24.194 
23:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:40:24.194 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:40:24.194 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:24.194 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:24.194 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.194 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:24.194 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:24.194 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.194 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:24.194 "name": "raid_bdev1", 00:40:24.194 "uuid": "f1de724a-f67b-49a9-88c8-d60449a413bc", 00:40:24.194 "strip_size_kb": 64, 00:40:24.194 "state": "online", 00:40:24.194 "raid_level": "raid5f", 00:40:24.194 "superblock": true, 00:40:24.194 "num_base_bdevs": 3, 00:40:24.194 "num_base_bdevs_discovered": 3, 00:40:24.194 "num_base_bdevs_operational": 3, 00:40:24.194 "base_bdevs_list": [ 00:40:24.194 { 00:40:24.194 "name": "spare", 00:40:24.194 "uuid": "f6d281b5-09db-568d-9e75-8e15b9794fb4", 00:40:24.194 "is_configured": true, 00:40:24.194 "data_offset": 2048, 00:40:24.194 "data_size": 63488 00:40:24.194 }, 00:40:24.194 { 00:40:24.194 "name": "BaseBdev2", 00:40:24.194 "uuid": "0644bc9d-3e20-5402-9bd9-b10037c22a16", 00:40:24.194 "is_configured": true, 00:40:24.194 "data_offset": 2048, 00:40:24.194 "data_size": 63488 00:40:24.194 }, 00:40:24.194 { 00:40:24.194 "name": "BaseBdev3", 00:40:24.194 "uuid": "5a206502-212b-57e1-8881-0322af769232", 00:40:24.194 "is_configured": true, 00:40:24.194 "data_offset": 2048, 
00:40:24.194 "data_size": 63488 00:40:24.194 } 00:40:24.194 ] 00:40:24.194 }' 00:40:24.194 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:24.194 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:40:24.194 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:24.194 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:40:24.194 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:40:24.194 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:24.194 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:24.194 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:24.194 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:24.194 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:40:24.194 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:24.194 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:24.194 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:24.194 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:24.194 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:24.194 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.194 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:24.194 
23:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:24.194 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.194 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:24.194 "name": "raid_bdev1", 00:40:24.194 "uuid": "f1de724a-f67b-49a9-88c8-d60449a413bc", 00:40:24.194 "strip_size_kb": 64, 00:40:24.194 "state": "online", 00:40:24.194 "raid_level": "raid5f", 00:40:24.194 "superblock": true, 00:40:24.194 "num_base_bdevs": 3, 00:40:24.194 "num_base_bdevs_discovered": 3, 00:40:24.194 "num_base_bdevs_operational": 3, 00:40:24.194 "base_bdevs_list": [ 00:40:24.194 { 00:40:24.194 "name": "spare", 00:40:24.194 "uuid": "f6d281b5-09db-568d-9e75-8e15b9794fb4", 00:40:24.194 "is_configured": true, 00:40:24.194 "data_offset": 2048, 00:40:24.194 "data_size": 63488 00:40:24.194 }, 00:40:24.194 { 00:40:24.194 "name": "BaseBdev2", 00:40:24.194 "uuid": "0644bc9d-3e20-5402-9bd9-b10037c22a16", 00:40:24.194 "is_configured": true, 00:40:24.194 "data_offset": 2048, 00:40:24.194 "data_size": 63488 00:40:24.194 }, 00:40:24.194 { 00:40:24.194 "name": "BaseBdev3", 00:40:24.194 "uuid": "5a206502-212b-57e1-8881-0322af769232", 00:40:24.194 "is_configured": true, 00:40:24.194 "data_offset": 2048, 00:40:24.194 "data_size": 63488 00:40:24.194 } 00:40:24.194 ] 00:40:24.194 }' 00:40:24.194 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:24.194 23:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:24.760 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:40:24.760 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.760 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:24.760 [2024-12-09 23:22:05.164761] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:40:24.760 [2024-12-09 23:22:05.164800] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:40:24.760 [2024-12-09 23:22:05.164895] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:24.760 [2024-12-09 23:22:05.164986] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:40:24.760 [2024-12-09 23:22:05.165015] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:40:24.760 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.760 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:24.760 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.760 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:24.760 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:40:24.760 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.760 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:40:24.760 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:40:24.760 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:40:24.760 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:40:24.760 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:40:24.760 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:40:24.760 23:22:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:40:24.760 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:40:24.760 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:40:24.760 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:40:24.760 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:40:24.760 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:40:24.760 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:40:25.016 /dev/nbd0 00:40:25.016 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:40:25.016 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:40:25.016 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:40:25.016 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:40:25.016 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:40:25.016 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:40:25.017 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:40:25.017 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:40:25.017 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:40:25.017 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:40:25.017 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:25.017 1+0 records in 00:40:25.017 1+0 records out 00:40:25.017 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244723 s, 16.7 MB/s 00:40:25.017 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:25.017 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:40:25.017 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:25.017 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:40:25.017 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:40:25.017 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:25.017 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:40:25.017 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:40:25.273 /dev/nbd1 00:40:25.273 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:40:25.273 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:40:25.273 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:40:25.273 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:40:25.273 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:40:25.273 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:40:25.273 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:40:25.273 
23:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:40:25.273 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:40:25.273 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:40:25.274 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:25.274 1+0 records in 00:40:25.274 1+0 records out 00:40:25.274 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346829 s, 11.8 MB/s 00:40:25.274 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:25.274 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:40:25.274 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:25.274 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:40:25.274 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:40:25.274 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:25.274 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:40:25.274 23:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:40:25.532 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:40:25.532 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:40:25.532 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:40:25.532 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:40:25.532 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:40:25.532 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:25.532 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:40:25.790 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:40:25.790 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:25.790 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:25.790 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:25.790 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:25.790 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:25.790 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:40:25.790 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:40:25.790 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:25.790 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:40:26.048 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:40:26.048 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:40:26.048 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:40:26.048 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:26.048 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:26.048 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:40:26.048 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:40:26.048 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:40:26.048 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:40:26.048 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:40:26.048 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:26.048 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:26.048 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:26.048 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:40:26.048 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:26.048 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:26.048 [2024-12-09 23:22:06.495998] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:40:26.048 [2024-12-09 23:22:06.496063] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:26.048 [2024-12-09 23:22:06.496086] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:40:26.048 [2024-12-09 23:22:06.496100] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:26.049 [2024-12-09 23:22:06.498721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:26.049 [2024-12-09 23:22:06.498769] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:40:26.049 [2024-12-09 23:22:06.498868] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:40:26.049 [2024-12-09 23:22:06.498928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:26.049 [2024-12-09 23:22:06.499106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:40:26.049 [2024-12-09 23:22:06.499213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:40:26.049 spare 00:40:26.049 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:26.049 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:40:26.049 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:26.049 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:26.049 [2024-12-09 23:22:06.599150] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:40:26.049 [2024-12-09 23:22:06.599220] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:40:26.049 [2024-12-09 23:22:06.599582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:40:26.049 [2024-12-09 23:22:06.605566] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:40:26.049 [2024-12-09 23:22:06.605593] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:40:26.049 [2024-12-09 23:22:06.605820] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:26.049 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:26.049 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:40:26.049 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:26.049 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:26.049 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:26.049 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:26.049 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:40:26.049 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:26.049 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:26.049 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:26.049 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:26.049 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:26.049 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:26.049 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:26.049 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:26.049 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:26.049 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:26.049 "name": "raid_bdev1", 00:40:26.049 "uuid": "f1de724a-f67b-49a9-88c8-d60449a413bc", 00:40:26.049 "strip_size_kb": 64, 00:40:26.049 "state": "online", 00:40:26.049 "raid_level": "raid5f", 00:40:26.049 "superblock": true, 00:40:26.049 "num_base_bdevs": 3, 00:40:26.049 "num_base_bdevs_discovered": 3, 00:40:26.049 "num_base_bdevs_operational": 3, 00:40:26.049 "base_bdevs_list": [ 00:40:26.049 { 
00:40:26.049 "name": "spare", 00:40:26.049 "uuid": "f6d281b5-09db-568d-9e75-8e15b9794fb4", 00:40:26.049 "is_configured": true, 00:40:26.049 "data_offset": 2048, 00:40:26.049 "data_size": 63488 00:40:26.049 }, 00:40:26.049 { 00:40:26.049 "name": "BaseBdev2", 00:40:26.049 "uuid": "0644bc9d-3e20-5402-9bd9-b10037c22a16", 00:40:26.049 "is_configured": true, 00:40:26.049 "data_offset": 2048, 00:40:26.049 "data_size": 63488 00:40:26.049 }, 00:40:26.049 { 00:40:26.049 "name": "BaseBdev3", 00:40:26.049 "uuid": "5a206502-212b-57e1-8881-0322af769232", 00:40:26.049 "is_configured": true, 00:40:26.049 "data_offset": 2048, 00:40:26.049 "data_size": 63488 00:40:26.049 } 00:40:26.049 ] 00:40:26.049 }' 00:40:26.049 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:26.049 23:22:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:26.618 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:26.618 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:26.618 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:40:26.618 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:40:26.618 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:26.618 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:26.618 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:26.618 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:26.618 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:26.618 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:26.618 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:26.618 "name": "raid_bdev1", 00:40:26.618 "uuid": "f1de724a-f67b-49a9-88c8-d60449a413bc", 00:40:26.618 "strip_size_kb": 64, 00:40:26.618 "state": "online", 00:40:26.618 "raid_level": "raid5f", 00:40:26.618 "superblock": true, 00:40:26.618 "num_base_bdevs": 3, 00:40:26.618 "num_base_bdevs_discovered": 3, 00:40:26.618 "num_base_bdevs_operational": 3, 00:40:26.618 "base_bdevs_list": [ 00:40:26.618 { 00:40:26.618 "name": "spare", 00:40:26.618 "uuid": "f6d281b5-09db-568d-9e75-8e15b9794fb4", 00:40:26.618 "is_configured": true, 00:40:26.618 "data_offset": 2048, 00:40:26.618 "data_size": 63488 00:40:26.618 }, 00:40:26.618 { 00:40:26.618 "name": "BaseBdev2", 00:40:26.618 "uuid": "0644bc9d-3e20-5402-9bd9-b10037c22a16", 00:40:26.618 "is_configured": true, 00:40:26.618 "data_offset": 2048, 00:40:26.618 "data_size": 63488 00:40:26.618 }, 00:40:26.618 { 00:40:26.618 "name": "BaseBdev3", 00:40:26.618 "uuid": "5a206502-212b-57e1-8881-0322af769232", 00:40:26.618 "is_configured": true, 00:40:26.618 "data_offset": 2048, 00:40:26.618 "data_size": 63488 00:40:26.618 } 00:40:26.618 ] 00:40:26.618 }' 00:40:26.618 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:26.618 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:40:26.618 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:26.618 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:40:26.618 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:40:26.618 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:26.618 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:40:26.618 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:26.618 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:26.618 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:40:26.618 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:40:26.618 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:26.618 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:26.618 [2024-12-09 23:22:07.251264] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:26.877 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:26.877 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:40:26.877 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:26.877 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:26.877 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:26.877 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:26.877 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:40:26.877 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:26.877 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:26.877 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:26.877 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:40:26.877 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:26.877 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:26.877 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:26.877 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:26.877 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:26.877 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:26.877 "name": "raid_bdev1", 00:40:26.877 "uuid": "f1de724a-f67b-49a9-88c8-d60449a413bc", 00:40:26.877 "strip_size_kb": 64, 00:40:26.877 "state": "online", 00:40:26.877 "raid_level": "raid5f", 00:40:26.877 "superblock": true, 00:40:26.877 "num_base_bdevs": 3, 00:40:26.877 "num_base_bdevs_discovered": 2, 00:40:26.877 "num_base_bdevs_operational": 2, 00:40:26.877 "base_bdevs_list": [ 00:40:26.877 { 00:40:26.877 "name": null, 00:40:26.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:26.877 "is_configured": false, 00:40:26.877 "data_offset": 0, 00:40:26.877 "data_size": 63488 00:40:26.877 }, 00:40:26.877 { 00:40:26.877 "name": "BaseBdev2", 00:40:26.877 "uuid": "0644bc9d-3e20-5402-9bd9-b10037c22a16", 00:40:26.877 "is_configured": true, 00:40:26.877 "data_offset": 2048, 00:40:26.877 "data_size": 63488 00:40:26.877 }, 00:40:26.877 { 00:40:26.877 "name": "BaseBdev3", 00:40:26.877 "uuid": "5a206502-212b-57e1-8881-0322af769232", 00:40:26.877 "is_configured": true, 00:40:26.877 "data_offset": 2048, 00:40:26.877 "data_size": 63488 00:40:26.877 } 00:40:26.877 ] 00:40:26.877 }' 00:40:26.877 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:26.877 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:40:27.136 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:40:27.136 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:27.136 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:27.136 [2024-12-09 23:22:07.694682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:27.136 [2024-12-09 23:22:07.694886] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:40:27.136 [2024-12-09 23:22:07.694907] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:40:27.136 [2024-12-09 23:22:07.694955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:27.136 [2024-12-09 23:22:07.711171] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:40:27.136 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:27.136 23:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:40:27.136 [2024-12-09 23:22:07.719545] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:40:28.513 23:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:28.513 23:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:28.513 23:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:28.513 23:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:28.513 23:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:28.513 23:22:08 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:28.513 23:22:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:28.513 23:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:28.513 23:22:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:28.513 23:22:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:28.513 23:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:28.513 "name": "raid_bdev1", 00:40:28.513 "uuid": "f1de724a-f67b-49a9-88c8-d60449a413bc", 00:40:28.513 "strip_size_kb": 64, 00:40:28.513 "state": "online", 00:40:28.513 "raid_level": "raid5f", 00:40:28.513 "superblock": true, 00:40:28.513 "num_base_bdevs": 3, 00:40:28.513 "num_base_bdevs_discovered": 3, 00:40:28.513 "num_base_bdevs_operational": 3, 00:40:28.513 "process": { 00:40:28.513 "type": "rebuild", 00:40:28.513 "target": "spare", 00:40:28.513 "progress": { 00:40:28.513 "blocks": 18432, 00:40:28.513 "percent": 14 00:40:28.513 } 00:40:28.513 }, 00:40:28.513 "base_bdevs_list": [ 00:40:28.513 { 00:40:28.513 "name": "spare", 00:40:28.513 "uuid": "f6d281b5-09db-568d-9e75-8e15b9794fb4", 00:40:28.513 "is_configured": true, 00:40:28.513 "data_offset": 2048, 00:40:28.513 "data_size": 63488 00:40:28.513 }, 00:40:28.513 { 00:40:28.513 "name": "BaseBdev2", 00:40:28.513 "uuid": "0644bc9d-3e20-5402-9bd9-b10037c22a16", 00:40:28.513 "is_configured": true, 00:40:28.513 "data_offset": 2048, 00:40:28.513 "data_size": 63488 00:40:28.513 }, 00:40:28.513 { 00:40:28.513 "name": "BaseBdev3", 00:40:28.513 "uuid": "5a206502-212b-57e1-8881-0322af769232", 00:40:28.513 "is_configured": true, 00:40:28.513 "data_offset": 2048, 00:40:28.513 "data_size": 63488 00:40:28.513 } 00:40:28.513 ] 00:40:28.513 }' 00:40:28.513 23:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq 
-r '.process.type // "none"' 00:40:28.513 23:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:28.513 23:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:28.513 23:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:28.513 23:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:40:28.513 23:22:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:28.513 23:22:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:28.513 [2024-12-09 23:22:08.843104] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:28.513 [2024-12-09 23:22:08.929577] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:40:28.513 [2024-12-09 23:22:08.929654] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:28.513 [2024-12-09 23:22:08.929672] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:28.513 [2024-12-09 23:22:08.929685] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:40:28.513 23:22:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:28.513 23:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:40:28.513 23:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:28.513 23:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:28.513 23:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:28.513 23:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:40:28.513 23:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:40:28.514 23:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:28.514 23:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:28.514 23:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:28.514 23:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:28.514 23:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:28.514 23:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:28.514 23:22:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:28.514 23:22:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:28.514 23:22:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:28.514 23:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:28.514 "name": "raid_bdev1", 00:40:28.514 "uuid": "f1de724a-f67b-49a9-88c8-d60449a413bc", 00:40:28.514 "strip_size_kb": 64, 00:40:28.514 "state": "online", 00:40:28.514 "raid_level": "raid5f", 00:40:28.514 "superblock": true, 00:40:28.514 "num_base_bdevs": 3, 00:40:28.514 "num_base_bdevs_discovered": 2, 00:40:28.514 "num_base_bdevs_operational": 2, 00:40:28.514 "base_bdevs_list": [ 00:40:28.514 { 00:40:28.514 "name": null, 00:40:28.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:28.514 "is_configured": false, 00:40:28.514 "data_offset": 0, 00:40:28.514 "data_size": 63488 00:40:28.514 }, 00:40:28.514 { 00:40:28.514 "name": "BaseBdev2", 00:40:28.514 "uuid": "0644bc9d-3e20-5402-9bd9-b10037c22a16", 00:40:28.514 "is_configured": true, 00:40:28.514 
"data_offset": 2048, 00:40:28.514 "data_size": 63488 00:40:28.514 }, 00:40:28.514 { 00:40:28.514 "name": "BaseBdev3", 00:40:28.514 "uuid": "5a206502-212b-57e1-8881-0322af769232", 00:40:28.514 "is_configured": true, 00:40:28.514 "data_offset": 2048, 00:40:28.514 "data_size": 63488 00:40:28.514 } 00:40:28.514 ] 00:40:28.514 }' 00:40:28.514 23:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:28.514 23:22:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:29.080 23:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:40:29.080 23:22:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.080 23:22:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:29.080 [2024-12-09 23:22:09.430556] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:40:29.080 [2024-12-09 23:22:09.430642] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:29.080 [2024-12-09 23:22:09.430668] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:40:29.080 [2024-12-09 23:22:09.430687] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:29.080 [2024-12-09 23:22:09.431233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:29.080 [2024-12-09 23:22:09.431266] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:40:29.080 [2024-12-09 23:22:09.431376] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:40:29.080 [2024-12-09 23:22:09.431410] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:40:29.080 [2024-12-09 23:22:09.431423] bdev_raid.c:3758:raid_bdev_examine_sb: 
*NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:40:29.080 [2024-12-09 23:22:09.431451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:29.080 [2024-12-09 23:22:09.447344] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:40:29.080 spare 00:40:29.080 23:22:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.080 23:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:40:29.080 [2024-12-09 23:22:09.455166] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:40:30.014 23:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:30.014 23:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:30.014 23:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:30.014 23:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:30.014 23:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:30.014 23:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:30.014 23:22:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:30.014 23:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:30.014 23:22:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:30.014 23:22:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:30.014 23:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:30.014 "name": "raid_bdev1", 00:40:30.014 "uuid": "f1de724a-f67b-49a9-88c8-d60449a413bc", 
00:40:30.014 "strip_size_kb": 64, 00:40:30.014 "state": "online", 00:40:30.014 "raid_level": "raid5f", 00:40:30.014 "superblock": true, 00:40:30.014 "num_base_bdevs": 3, 00:40:30.014 "num_base_bdevs_discovered": 3, 00:40:30.014 "num_base_bdevs_operational": 3, 00:40:30.014 "process": { 00:40:30.014 "type": "rebuild", 00:40:30.014 "target": "spare", 00:40:30.014 "progress": { 00:40:30.014 "blocks": 20480, 00:40:30.014 "percent": 16 00:40:30.014 } 00:40:30.014 }, 00:40:30.014 "base_bdevs_list": [ 00:40:30.014 { 00:40:30.014 "name": "spare", 00:40:30.014 "uuid": "f6d281b5-09db-568d-9e75-8e15b9794fb4", 00:40:30.014 "is_configured": true, 00:40:30.014 "data_offset": 2048, 00:40:30.014 "data_size": 63488 00:40:30.014 }, 00:40:30.014 { 00:40:30.014 "name": "BaseBdev2", 00:40:30.014 "uuid": "0644bc9d-3e20-5402-9bd9-b10037c22a16", 00:40:30.014 "is_configured": true, 00:40:30.014 "data_offset": 2048, 00:40:30.014 "data_size": 63488 00:40:30.014 }, 00:40:30.014 { 00:40:30.014 "name": "BaseBdev3", 00:40:30.014 "uuid": "5a206502-212b-57e1-8881-0322af769232", 00:40:30.014 "is_configured": true, 00:40:30.014 "data_offset": 2048, 00:40:30.014 "data_size": 63488 00:40:30.014 } 00:40:30.014 ] 00:40:30.014 }' 00:40:30.014 23:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:30.014 23:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:30.014 23:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:30.014 23:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:30.014 23:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:40:30.014 23:22:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:30.014 23:22:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:40:30.014 [2024-12-09 23:22:10.582569] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:30.274 [2024-12-09 23:22:10.664303] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:40:30.274 [2024-12-09 23:22:10.664372] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:30.274 [2024-12-09 23:22:10.664404] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:30.274 [2024-12-09 23:22:10.664414] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:40:30.274 23:22:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:30.274 23:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:40:30.274 23:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:30.274 23:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:30.274 23:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:30.274 23:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:30.274 23:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:40:30.274 23:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:30.274 23:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:30.274 23:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:30.274 23:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:30.274 23:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:30.274 
23:22:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:30.274 23:22:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:30.274 23:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:30.274 23:22:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:30.274 23:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:30.274 "name": "raid_bdev1", 00:40:30.274 "uuid": "f1de724a-f67b-49a9-88c8-d60449a413bc", 00:40:30.274 "strip_size_kb": 64, 00:40:30.274 "state": "online", 00:40:30.274 "raid_level": "raid5f", 00:40:30.274 "superblock": true, 00:40:30.274 "num_base_bdevs": 3, 00:40:30.274 "num_base_bdevs_discovered": 2, 00:40:30.274 "num_base_bdevs_operational": 2, 00:40:30.274 "base_bdevs_list": [ 00:40:30.274 { 00:40:30.274 "name": null, 00:40:30.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:30.274 "is_configured": false, 00:40:30.274 "data_offset": 0, 00:40:30.274 "data_size": 63488 00:40:30.274 }, 00:40:30.274 { 00:40:30.274 "name": "BaseBdev2", 00:40:30.274 "uuid": "0644bc9d-3e20-5402-9bd9-b10037c22a16", 00:40:30.274 "is_configured": true, 00:40:30.274 "data_offset": 2048, 00:40:30.274 "data_size": 63488 00:40:30.274 }, 00:40:30.274 { 00:40:30.274 "name": "BaseBdev3", 00:40:30.274 "uuid": "5a206502-212b-57e1-8881-0322af769232", 00:40:30.274 "is_configured": true, 00:40:30.274 "data_offset": 2048, 00:40:30.274 "data_size": 63488 00:40:30.274 } 00:40:30.274 ] 00:40:30.274 }' 00:40:30.274 23:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:30.274 23:22:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:30.532 23:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:30.532 23:22:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:30.532 23:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:40:30.532 23:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:40:30.532 23:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:30.532 23:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:30.532 23:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:30.532 23:22:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:30.532 23:22:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:30.792 23:22:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:30.792 23:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:30.792 "name": "raid_bdev1", 00:40:30.792 "uuid": "f1de724a-f67b-49a9-88c8-d60449a413bc", 00:40:30.792 "strip_size_kb": 64, 00:40:30.792 "state": "online", 00:40:30.792 "raid_level": "raid5f", 00:40:30.792 "superblock": true, 00:40:30.792 "num_base_bdevs": 3, 00:40:30.792 "num_base_bdevs_discovered": 2, 00:40:30.792 "num_base_bdevs_operational": 2, 00:40:30.792 "base_bdevs_list": [ 00:40:30.792 { 00:40:30.792 "name": null, 00:40:30.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:30.792 "is_configured": false, 00:40:30.792 "data_offset": 0, 00:40:30.792 "data_size": 63488 00:40:30.792 }, 00:40:30.792 { 00:40:30.792 "name": "BaseBdev2", 00:40:30.792 "uuid": "0644bc9d-3e20-5402-9bd9-b10037c22a16", 00:40:30.792 "is_configured": true, 00:40:30.792 "data_offset": 2048, 00:40:30.792 "data_size": 63488 00:40:30.792 }, 00:40:30.792 { 00:40:30.792 "name": "BaseBdev3", 00:40:30.792 "uuid": 
"5a206502-212b-57e1-8881-0322af769232", 00:40:30.792 "is_configured": true, 00:40:30.792 "data_offset": 2048, 00:40:30.792 "data_size": 63488 00:40:30.792 } 00:40:30.792 ] 00:40:30.792 }' 00:40:30.792 23:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:30.792 23:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:40:30.792 23:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:30.792 23:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:40:30.792 23:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:40:30.792 23:22:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:30.792 23:22:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:30.792 23:22:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:30.792 23:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:40:30.792 23:22:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:30.792 23:22:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:30.792 [2024-12-09 23:22:11.288354] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:40:30.792 [2024-12-09 23:22:11.288440] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:30.792 [2024-12-09 23:22:11.288475] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:40:30.792 [2024-12-09 23:22:11.288488] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:30.792 [2024-12-09 23:22:11.289055] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:30.792 [2024-12-09 23:22:11.289086] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:40:30.792 [2024-12-09 23:22:11.289195] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:40:30.792 [2024-12-09 23:22:11.289214] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:40:30.792 [2024-12-09 23:22:11.289242] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:40:30.792 [2024-12-09 23:22:11.289256] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:40:30.792 BaseBdev1 00:40:30.792 23:22:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:30.792 23:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:40:31.729 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:40:31.729 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:31.729 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:31.729 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:31.729 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:31.729 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:40:31.729 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:31.729 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:31.729 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:31.729 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:31.729 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:31.729 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:31.729 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:31.729 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:31.729 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:31.729 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:31.729 "name": "raid_bdev1", 00:40:31.729 "uuid": "f1de724a-f67b-49a9-88c8-d60449a413bc", 00:40:31.729 "strip_size_kb": 64, 00:40:31.729 "state": "online", 00:40:31.729 "raid_level": "raid5f", 00:40:31.729 "superblock": true, 00:40:31.729 "num_base_bdevs": 3, 00:40:31.729 "num_base_bdevs_discovered": 2, 00:40:31.729 "num_base_bdevs_operational": 2, 00:40:31.729 "base_bdevs_list": [ 00:40:31.729 { 00:40:31.729 "name": null, 00:40:31.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:31.729 "is_configured": false, 00:40:31.729 "data_offset": 0, 00:40:31.729 "data_size": 63488 00:40:31.729 }, 00:40:31.729 { 00:40:31.729 "name": "BaseBdev2", 00:40:31.729 "uuid": "0644bc9d-3e20-5402-9bd9-b10037c22a16", 00:40:31.729 "is_configured": true, 00:40:31.729 "data_offset": 2048, 00:40:31.730 "data_size": 63488 00:40:31.730 }, 00:40:31.730 { 00:40:31.730 "name": "BaseBdev3", 00:40:31.730 "uuid": "5a206502-212b-57e1-8881-0322af769232", 00:40:31.730 "is_configured": true, 00:40:31.730 "data_offset": 2048, 00:40:31.730 "data_size": 63488 00:40:31.730 } 00:40:31.730 ] 00:40:31.730 }' 00:40:31.730 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:40:31.730 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:32.298 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:32.298 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:32.298 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:40:32.298 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:40:32.298 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:32.298 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:32.298 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:32.298 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.298 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:32.298 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:32.298 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:32.298 "name": "raid_bdev1", 00:40:32.298 "uuid": "f1de724a-f67b-49a9-88c8-d60449a413bc", 00:40:32.298 "strip_size_kb": 64, 00:40:32.298 "state": "online", 00:40:32.298 "raid_level": "raid5f", 00:40:32.298 "superblock": true, 00:40:32.298 "num_base_bdevs": 3, 00:40:32.298 "num_base_bdevs_discovered": 2, 00:40:32.298 "num_base_bdevs_operational": 2, 00:40:32.298 "base_bdevs_list": [ 00:40:32.298 { 00:40:32.298 "name": null, 00:40:32.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:32.298 "is_configured": false, 00:40:32.298 "data_offset": 0, 00:40:32.298 "data_size": 63488 00:40:32.298 }, 00:40:32.298 { 00:40:32.298 "name": 
"BaseBdev2", 00:40:32.298 "uuid": "0644bc9d-3e20-5402-9bd9-b10037c22a16", 00:40:32.298 "is_configured": true, 00:40:32.298 "data_offset": 2048, 00:40:32.298 "data_size": 63488 00:40:32.298 }, 00:40:32.298 { 00:40:32.298 "name": "BaseBdev3", 00:40:32.298 "uuid": "5a206502-212b-57e1-8881-0322af769232", 00:40:32.298 "is_configured": true, 00:40:32.298 "data_offset": 2048, 00:40:32.298 "data_size": 63488 00:40:32.298 } 00:40:32.298 ] 00:40:32.298 }' 00:40:32.298 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:32.298 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:40:32.298 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:32.298 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:40:32.298 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:40:32.298 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:40:32.298 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:40:32.298 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:40:32.298 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:32.298 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:40:32.298 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:32.298 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:40:32.298 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.298 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:32.298 [2024-12-09 23:22:12.866329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:40:32.298 [2024-12-09 23:22:12.866526] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:40:32.298 [2024-12-09 23:22:12.866546] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:40:32.298 request: 00:40:32.298 { 00:40:32.298 "base_bdev": "BaseBdev1", 00:40:32.298 "raid_bdev": "raid_bdev1", 00:40:32.298 "method": "bdev_raid_add_base_bdev", 00:40:32.298 "req_id": 1 00:40:32.298 } 00:40:32.298 Got JSON-RPC error response 00:40:32.298 response: 00:40:32.298 { 00:40:32.298 "code": -22, 00:40:32.298 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:40:32.298 } 00:40:32.298 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:40:32.298 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:40:32.298 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:32.298 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:32.298 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:32.298 23:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:40:33.676 23:22:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:40:33.676 23:22:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:33.676 23:22:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:40:33.676 23:22:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:33.676 23:22:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:33.676 23:22:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:40:33.676 23:22:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:33.676 23:22:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:33.676 23:22:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:33.676 23:22:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:33.676 23:22:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:33.676 23:22:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:33.676 23:22:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.676 23:22:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:33.676 23:22:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.676 23:22:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:33.676 "name": "raid_bdev1", 00:40:33.676 "uuid": "f1de724a-f67b-49a9-88c8-d60449a413bc", 00:40:33.676 "strip_size_kb": 64, 00:40:33.676 "state": "online", 00:40:33.676 "raid_level": "raid5f", 00:40:33.676 "superblock": true, 00:40:33.676 "num_base_bdevs": 3, 00:40:33.676 "num_base_bdevs_discovered": 2, 00:40:33.676 "num_base_bdevs_operational": 2, 00:40:33.676 "base_bdevs_list": [ 00:40:33.676 { 00:40:33.676 "name": null, 00:40:33.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:33.676 "is_configured": false, 00:40:33.676 "data_offset": 0, 00:40:33.676 
"data_size": 63488 00:40:33.676 }, 00:40:33.676 { 00:40:33.676 "name": "BaseBdev2", 00:40:33.676 "uuid": "0644bc9d-3e20-5402-9bd9-b10037c22a16", 00:40:33.676 "is_configured": true, 00:40:33.676 "data_offset": 2048, 00:40:33.676 "data_size": 63488 00:40:33.676 }, 00:40:33.676 { 00:40:33.676 "name": "BaseBdev3", 00:40:33.676 "uuid": "5a206502-212b-57e1-8881-0322af769232", 00:40:33.676 "is_configured": true, 00:40:33.676 "data_offset": 2048, 00:40:33.676 "data_size": 63488 00:40:33.676 } 00:40:33.676 ] 00:40:33.676 }' 00:40:33.676 23:22:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:33.676 23:22:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:33.936 23:22:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:33.936 23:22:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:33.936 23:22:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:40:33.936 23:22:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:40:33.936 23:22:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:33.936 23:22:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:33.936 23:22:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:33.936 23:22:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.936 23:22:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:33.936 23:22:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.936 23:22:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:33.936 "name": "raid_bdev1", 00:40:33.936 
"uuid": "f1de724a-f67b-49a9-88c8-d60449a413bc", 00:40:33.936 "strip_size_kb": 64, 00:40:33.936 "state": "online", 00:40:33.936 "raid_level": "raid5f", 00:40:33.936 "superblock": true, 00:40:33.936 "num_base_bdevs": 3, 00:40:33.936 "num_base_bdevs_discovered": 2, 00:40:33.936 "num_base_bdevs_operational": 2, 00:40:33.936 "base_bdevs_list": [ 00:40:33.936 { 00:40:33.936 "name": null, 00:40:33.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:33.936 "is_configured": false, 00:40:33.936 "data_offset": 0, 00:40:33.936 "data_size": 63488 00:40:33.936 }, 00:40:33.936 { 00:40:33.936 "name": "BaseBdev2", 00:40:33.936 "uuid": "0644bc9d-3e20-5402-9bd9-b10037c22a16", 00:40:33.936 "is_configured": true, 00:40:33.936 "data_offset": 2048, 00:40:33.936 "data_size": 63488 00:40:33.936 }, 00:40:33.936 { 00:40:33.936 "name": "BaseBdev3", 00:40:33.936 "uuid": "5a206502-212b-57e1-8881-0322af769232", 00:40:33.936 "is_configured": true, 00:40:33.936 "data_offset": 2048, 00:40:33.936 "data_size": 63488 00:40:33.936 } 00:40:33.936 ] 00:40:33.936 }' 00:40:33.936 23:22:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:33.936 23:22:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:40:33.936 23:22:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:33.936 23:22:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:40:33.936 23:22:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 81896 00:40:33.936 23:22:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81896 ']' 00:40:33.936 23:22:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 81896 00:40:33.936 23:22:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:40:33.936 23:22:14 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:33.936 23:22:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81896 00:40:33.936 23:22:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:33.936 killing process with pid 81896 00:40:33.936 23:22:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:33.936 23:22:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81896' 00:40:33.936 Received shutdown signal, test time was about 60.000000 seconds 00:40:33.936 00:40:33.936 Latency(us) 00:40:33.936 [2024-12-09T23:22:14.572Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:33.936 [2024-12-09T23:22:14.572Z] =================================================================================================================== 00:40:33.936 [2024-12-09T23:22:14.572Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:40:33.936 23:22:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 81896 00:40:33.936 [2024-12-09 23:22:14.503488] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:40:33.936 23:22:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 81896 00:40:33.936 [2024-12-09 23:22:14.503621] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:33.936 [2024-12-09 23:22:14.503686] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:40:33.936 [2024-12-09 23:22:14.503701] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:40:34.517 [2024-12-09 23:22:14.905241] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:40:35.497 23:22:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # 
return 0 00:40:35.497 00:40:35.497 real 0m23.613s 00:40:35.497 user 0m30.198s 00:40:35.497 sys 0m3.134s 00:40:35.497 23:22:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:35.497 ************************************ 00:40:35.497 END TEST raid5f_rebuild_test_sb 00:40:35.497 23:22:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:35.497 ************************************ 00:40:35.497 23:22:16 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:40:35.497 23:22:16 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:40:35.497 23:22:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:40:35.497 23:22:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:35.497 23:22:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:40:35.497 ************************************ 00:40:35.497 START TEST raid5f_state_function_test 00:40:35.497 ************************************ 00:40:35.497 23:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:40:35.497 23:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:40:35.497 23:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:40:35.497 23:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:40:35.497 23:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:40:35.497 23:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:40:35.756 23:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:40:35.756 23:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:40:35.756 23:22:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:40:35.756 23:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:40:35.756 23:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:40:35.756 23:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:40:35.756 23:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:40:35.756 23:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:40:35.756 23:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:40:35.756 23:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:40:35.756 23:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:40:35.756 23:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:40:35.756 23:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:40:35.756 23:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:40:35.756 23:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:40:35.756 23:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:40:35.756 23:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:40:35.756 23:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:40:35.756 23:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:40:35.756 23:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:40:35.756 
23:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:40:35.756 23:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:40:35.756 23:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:40:35.756 23:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:40:35.756 23:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82655 00:40:35.756 23:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:40:35.756 Process raid pid: 82655 00:40:35.756 23:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82655' 00:40:35.756 23:22:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82655 00:40:35.756 23:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 82655 ']' 00:40:35.756 23:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:35.756 23:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:35.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:35.756 23:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:35.756 23:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:35.756 23:22:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:35.756 [2024-12-09 23:22:16.230844] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:40:35.756 [2024-12-09 23:22:16.230964] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:36.015 [2024-12-09 23:22:16.411661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:36.015 [2024-12-09 23:22:16.530833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:36.274 [2024-12-09 23:22:16.720986] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:40:36.274 [2024-12-09 23:22:16.721032] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:40:36.533 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:36.533 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:40:36.533 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:40:36.533 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:36.533 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:36.533 [2024-12-09 23:22:17.064017] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:40:36.533 [2024-12-09 23:22:17.064097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:40:36.533 [2024-12-09 23:22:17.064117] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:40:36.533 [2024-12-09 23:22:17.064138] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:40:36.533 [2024-12-09 23:22:17.064152] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:40:36.533 [2024-12-09 23:22:17.064172] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:40:36.533 [2024-12-09 23:22:17.064186] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:40:36.533 [2024-12-09 23:22:17.064208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:40:36.533 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:36.533 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:40:36.533 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:36.533 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:36.533 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:36.533 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:36.533 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:40:36.533 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:36.533 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:36.533 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:36.533 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:36.533 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:36.533 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:36.533 23:22:17 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:36.533 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:36.533 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:36.533 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:36.533 "name": "Existed_Raid", 00:40:36.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:36.533 "strip_size_kb": 64, 00:40:36.533 "state": "configuring", 00:40:36.533 "raid_level": "raid5f", 00:40:36.533 "superblock": false, 00:40:36.533 "num_base_bdevs": 4, 00:40:36.533 "num_base_bdevs_discovered": 0, 00:40:36.533 "num_base_bdevs_operational": 4, 00:40:36.533 "base_bdevs_list": [ 00:40:36.533 { 00:40:36.533 "name": "BaseBdev1", 00:40:36.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:36.533 "is_configured": false, 00:40:36.533 "data_offset": 0, 00:40:36.533 "data_size": 0 00:40:36.533 }, 00:40:36.533 { 00:40:36.533 "name": "BaseBdev2", 00:40:36.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:36.533 "is_configured": false, 00:40:36.533 "data_offset": 0, 00:40:36.533 "data_size": 0 00:40:36.533 }, 00:40:36.533 { 00:40:36.533 "name": "BaseBdev3", 00:40:36.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:36.533 "is_configured": false, 00:40:36.533 "data_offset": 0, 00:40:36.533 "data_size": 0 00:40:36.533 }, 00:40:36.533 { 00:40:36.533 "name": "BaseBdev4", 00:40:36.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:36.533 "is_configured": false, 00:40:36.533 "data_offset": 0, 00:40:36.533 "data_size": 0 00:40:36.533 } 00:40:36.533 ] 00:40:36.533 }' 00:40:36.533 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:36.533 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:37.102 [2024-12-09 23:22:17.443435] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:40:37.102 [2024-12-09 23:22:17.443483] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:37.102 [2024-12-09 23:22:17.455387] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:40:37.102 [2024-12-09 23:22:17.455450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:40:37.102 [2024-12-09 23:22:17.455461] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:40:37.102 [2024-12-09 23:22:17.455473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:40:37.102 [2024-12-09 23:22:17.455481] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:40:37.102 [2024-12-09 23:22:17.455494] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:40:37.102 [2024-12-09 23:22:17.455502] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:40:37.102 [2024-12-09 23:22:17.455513] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:37.102 [2024-12-09 23:22:17.506935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:40:37.102 BaseBdev1 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:37.102 
23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:37.102 [ 00:40:37.102 { 00:40:37.102 "name": "BaseBdev1", 00:40:37.102 "aliases": [ 00:40:37.102 "6d015460-b1c1-4aa3-a6d1-0ce1a6c36ba0" 00:40:37.102 ], 00:40:37.102 "product_name": "Malloc disk", 00:40:37.102 "block_size": 512, 00:40:37.102 "num_blocks": 65536, 00:40:37.102 "uuid": "6d015460-b1c1-4aa3-a6d1-0ce1a6c36ba0", 00:40:37.102 "assigned_rate_limits": { 00:40:37.102 "rw_ios_per_sec": 0, 00:40:37.102 "rw_mbytes_per_sec": 0, 00:40:37.102 "r_mbytes_per_sec": 0, 00:40:37.102 "w_mbytes_per_sec": 0 00:40:37.102 }, 00:40:37.102 "claimed": true, 00:40:37.102 "claim_type": "exclusive_write", 00:40:37.102 "zoned": false, 00:40:37.102 "supported_io_types": { 00:40:37.102 "read": true, 00:40:37.102 "write": true, 00:40:37.102 "unmap": true, 00:40:37.102 "flush": true, 00:40:37.102 "reset": true, 00:40:37.102 "nvme_admin": false, 00:40:37.102 "nvme_io": false, 00:40:37.102 "nvme_io_md": false, 00:40:37.102 "write_zeroes": true, 00:40:37.102 "zcopy": true, 00:40:37.102 "get_zone_info": false, 00:40:37.102 "zone_management": false, 00:40:37.102 "zone_append": false, 00:40:37.102 "compare": false, 00:40:37.102 "compare_and_write": false, 00:40:37.102 "abort": true, 00:40:37.102 "seek_hole": false, 00:40:37.102 "seek_data": false, 00:40:37.102 "copy": true, 00:40:37.102 "nvme_iov_md": false 00:40:37.102 }, 00:40:37.102 "memory_domains": [ 00:40:37.102 { 00:40:37.102 "dma_device_id": "system", 00:40:37.102 "dma_device_type": 1 00:40:37.102 }, 00:40:37.102 { 00:40:37.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:37.102 "dma_device_type": 2 00:40:37.102 } 00:40:37.102 ], 00:40:37.102 "driver_specific": {} 00:40:37.102 } 
00:40:37.102 ] 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:40:37.102 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:37.102 "name": "Existed_Raid", 00:40:37.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:37.102 "strip_size_kb": 64, 00:40:37.102 "state": "configuring", 00:40:37.102 "raid_level": "raid5f", 00:40:37.102 "superblock": false, 00:40:37.102 "num_base_bdevs": 4, 00:40:37.102 "num_base_bdevs_discovered": 1, 00:40:37.102 "num_base_bdevs_operational": 4, 00:40:37.102 "base_bdevs_list": [ 00:40:37.102 { 00:40:37.102 "name": "BaseBdev1", 00:40:37.102 "uuid": "6d015460-b1c1-4aa3-a6d1-0ce1a6c36ba0", 00:40:37.102 "is_configured": true, 00:40:37.102 "data_offset": 0, 00:40:37.103 "data_size": 65536 00:40:37.103 }, 00:40:37.103 { 00:40:37.103 "name": "BaseBdev2", 00:40:37.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:37.103 "is_configured": false, 00:40:37.103 "data_offset": 0, 00:40:37.103 "data_size": 0 00:40:37.103 }, 00:40:37.103 { 00:40:37.103 "name": "BaseBdev3", 00:40:37.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:37.103 "is_configured": false, 00:40:37.103 "data_offset": 0, 00:40:37.103 "data_size": 0 00:40:37.103 }, 00:40:37.103 { 00:40:37.103 "name": "BaseBdev4", 00:40:37.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:37.103 "is_configured": false, 00:40:37.103 "data_offset": 0, 00:40:37.103 "data_size": 0 00:40:37.103 } 00:40:37.103 ] 00:40:37.103 }' 00:40:37.103 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:37.103 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:37.362 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:40:37.362 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:37.362 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:37.362 
[2024-12-09 23:22:17.978557] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:40:37.362 [2024-12-09 23:22:17.978620] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:40:37.362 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:37.362 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:40:37.362 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:37.362 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:37.362 [2024-12-09 23:22:17.990607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:40:37.362 [2024-12-09 23:22:17.992695] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:40:37.362 [2024-12-09 23:22:17.992744] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:40:37.362 [2024-12-09 23:22:17.992756] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:40:37.362 [2024-12-09 23:22:17.992771] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:40:37.362 [2024-12-09 23:22:17.992779] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:40:37.362 [2024-12-09 23:22:17.992791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:40:37.362 23:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:37.362 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:40:37.362 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:40:37.362 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:40:37.362 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:37.622 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:37.622 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:37.622 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:37.622 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:40:37.622 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:37.622 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:37.622 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:37.622 23:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:37.622 23:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:37.622 23:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:37.622 23:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:37.622 23:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:37.622 23:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:37.622 23:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:37.622 "name": "Existed_Raid", 00:40:37.622 "uuid": "00000000-0000-0000-0000-000000000000", 
00:40:37.622 "strip_size_kb": 64, 00:40:37.622 "state": "configuring", 00:40:37.622 "raid_level": "raid5f", 00:40:37.622 "superblock": false, 00:40:37.622 "num_base_bdevs": 4, 00:40:37.622 "num_base_bdevs_discovered": 1, 00:40:37.622 "num_base_bdevs_operational": 4, 00:40:37.622 "base_bdevs_list": [ 00:40:37.622 { 00:40:37.622 "name": "BaseBdev1", 00:40:37.622 "uuid": "6d015460-b1c1-4aa3-a6d1-0ce1a6c36ba0", 00:40:37.622 "is_configured": true, 00:40:37.622 "data_offset": 0, 00:40:37.622 "data_size": 65536 00:40:37.622 }, 00:40:37.622 { 00:40:37.622 "name": "BaseBdev2", 00:40:37.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:37.622 "is_configured": false, 00:40:37.622 "data_offset": 0, 00:40:37.622 "data_size": 0 00:40:37.622 }, 00:40:37.622 { 00:40:37.622 "name": "BaseBdev3", 00:40:37.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:37.622 "is_configured": false, 00:40:37.622 "data_offset": 0, 00:40:37.622 "data_size": 0 00:40:37.622 }, 00:40:37.622 { 00:40:37.622 "name": "BaseBdev4", 00:40:37.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:37.622 "is_configured": false, 00:40:37.622 "data_offset": 0, 00:40:37.622 "data_size": 0 00:40:37.622 } 00:40:37.622 ] 00:40:37.622 }' 00:40:37.622 23:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:37.622 23:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:37.881 23:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:40:37.881 23:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:37.881 23:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:37.881 [2024-12-09 23:22:18.472291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:40:37.881 BaseBdev2 00:40:37.881 23:22:18 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:37.881 23:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:40:37.881 23:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:40:37.881 23:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:37.881 23:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:40:37.881 23:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:37.881 23:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:37.881 23:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:40:37.881 23:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:37.881 23:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:37.881 23:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:37.881 23:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:40:37.881 23:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:37.881 23:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:37.881 [ 00:40:37.881 { 00:40:37.881 "name": "BaseBdev2", 00:40:37.881 "aliases": [ 00:40:37.881 "ce8efb0b-3f19-47b0-aa22-004c7c401faf" 00:40:37.881 ], 00:40:37.881 "product_name": "Malloc disk", 00:40:37.881 "block_size": 512, 00:40:37.881 "num_blocks": 65536, 00:40:37.881 "uuid": "ce8efb0b-3f19-47b0-aa22-004c7c401faf", 00:40:37.881 "assigned_rate_limits": { 00:40:37.881 "rw_ios_per_sec": 0, 00:40:37.881 "rw_mbytes_per_sec": 0, 00:40:37.881 
"r_mbytes_per_sec": 0, 00:40:37.881 "w_mbytes_per_sec": 0 00:40:37.881 }, 00:40:37.881 "claimed": true, 00:40:37.881 "claim_type": "exclusive_write", 00:40:37.881 "zoned": false, 00:40:37.881 "supported_io_types": { 00:40:37.881 "read": true, 00:40:37.881 "write": true, 00:40:37.881 "unmap": true, 00:40:37.881 "flush": true, 00:40:37.881 "reset": true, 00:40:37.881 "nvme_admin": false, 00:40:37.881 "nvme_io": false, 00:40:37.881 "nvme_io_md": false, 00:40:37.881 "write_zeroes": true, 00:40:37.881 "zcopy": true, 00:40:37.881 "get_zone_info": false, 00:40:37.881 "zone_management": false, 00:40:37.881 "zone_append": false, 00:40:37.881 "compare": false, 00:40:37.881 "compare_and_write": false, 00:40:37.881 "abort": true, 00:40:37.881 "seek_hole": false, 00:40:37.881 "seek_data": false, 00:40:37.881 "copy": true, 00:40:37.881 "nvme_iov_md": false 00:40:37.881 }, 00:40:37.881 "memory_domains": [ 00:40:37.881 { 00:40:38.141 "dma_device_id": "system", 00:40:38.141 "dma_device_type": 1 00:40:38.141 }, 00:40:38.141 { 00:40:38.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:38.141 "dma_device_type": 2 00:40:38.141 } 00:40:38.141 ], 00:40:38.141 "driver_specific": {} 00:40:38.141 } 00:40:38.141 ] 00:40:38.141 23:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:38.141 23:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:40:38.141 23:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:40:38.141 23:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:40:38.141 23:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:40:38.141 23:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:38.141 23:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:40:38.141 23:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:38.141 23:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:38.141 23:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:40:38.141 23:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:38.141 23:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:38.141 23:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:38.141 23:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:38.141 23:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:38.141 23:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:38.141 23:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:38.141 23:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:38.141 23:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:38.141 23:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:38.141 "name": "Existed_Raid", 00:40:38.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:38.141 "strip_size_kb": 64, 00:40:38.141 "state": "configuring", 00:40:38.141 "raid_level": "raid5f", 00:40:38.141 "superblock": false, 00:40:38.141 "num_base_bdevs": 4, 00:40:38.141 "num_base_bdevs_discovered": 2, 00:40:38.141 "num_base_bdevs_operational": 4, 00:40:38.141 "base_bdevs_list": [ 00:40:38.141 { 00:40:38.141 "name": "BaseBdev1", 00:40:38.141 "uuid": 
"6d015460-b1c1-4aa3-a6d1-0ce1a6c36ba0", 00:40:38.141 "is_configured": true, 00:40:38.141 "data_offset": 0, 00:40:38.141 "data_size": 65536 00:40:38.141 }, 00:40:38.141 { 00:40:38.141 "name": "BaseBdev2", 00:40:38.141 "uuid": "ce8efb0b-3f19-47b0-aa22-004c7c401faf", 00:40:38.141 "is_configured": true, 00:40:38.141 "data_offset": 0, 00:40:38.141 "data_size": 65536 00:40:38.141 }, 00:40:38.141 { 00:40:38.141 "name": "BaseBdev3", 00:40:38.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:38.141 "is_configured": false, 00:40:38.141 "data_offset": 0, 00:40:38.141 "data_size": 0 00:40:38.141 }, 00:40:38.141 { 00:40:38.141 "name": "BaseBdev4", 00:40:38.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:38.141 "is_configured": false, 00:40:38.141 "data_offset": 0, 00:40:38.141 "data_size": 0 00:40:38.141 } 00:40:38.141 ] 00:40:38.141 }' 00:40:38.141 23:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:38.141 23:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:38.400 23:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:40:38.400 23:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:38.400 23:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:38.400 [2024-12-09 23:22:18.992532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:40:38.400 BaseBdev3 00:40:38.400 23:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:38.400 23:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:40:38.400 23:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:40:38.400 23:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:40:38.400 23:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:40:38.400 23:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:38.400 23:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:38.400 23:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:40:38.400 23:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:38.401 23:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:38.401 23:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:38.401 23:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:40:38.401 23:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:38.401 23:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:38.401 [ 00:40:38.401 { 00:40:38.401 "name": "BaseBdev3", 00:40:38.401 "aliases": [ 00:40:38.401 "66160eaa-0cc2-460b-bfce-328f2e332ac4" 00:40:38.401 ], 00:40:38.401 "product_name": "Malloc disk", 00:40:38.401 "block_size": 512, 00:40:38.401 "num_blocks": 65536, 00:40:38.401 "uuid": "66160eaa-0cc2-460b-bfce-328f2e332ac4", 00:40:38.401 "assigned_rate_limits": { 00:40:38.401 "rw_ios_per_sec": 0, 00:40:38.401 "rw_mbytes_per_sec": 0, 00:40:38.401 "r_mbytes_per_sec": 0, 00:40:38.401 "w_mbytes_per_sec": 0 00:40:38.401 }, 00:40:38.401 "claimed": true, 00:40:38.401 "claim_type": "exclusive_write", 00:40:38.401 "zoned": false, 00:40:38.401 "supported_io_types": { 00:40:38.401 "read": true, 00:40:38.401 "write": true, 00:40:38.401 "unmap": true, 00:40:38.401 "flush": true, 00:40:38.401 "reset": true, 00:40:38.401 "nvme_admin": false, 
00:40:38.401 "nvme_io": false, 00:40:38.401 "nvme_io_md": false, 00:40:38.401 "write_zeroes": true, 00:40:38.401 "zcopy": true, 00:40:38.401 "get_zone_info": false, 00:40:38.401 "zone_management": false, 00:40:38.401 "zone_append": false, 00:40:38.401 "compare": false, 00:40:38.401 "compare_and_write": false, 00:40:38.401 "abort": true, 00:40:38.401 "seek_hole": false, 00:40:38.401 "seek_data": false, 00:40:38.401 "copy": true, 00:40:38.660 "nvme_iov_md": false 00:40:38.660 }, 00:40:38.660 "memory_domains": [ 00:40:38.660 { 00:40:38.660 "dma_device_id": "system", 00:40:38.660 "dma_device_type": 1 00:40:38.660 }, 00:40:38.660 { 00:40:38.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:38.660 "dma_device_type": 2 00:40:38.660 } 00:40:38.660 ], 00:40:38.660 "driver_specific": {} 00:40:38.660 } 00:40:38.660 ] 00:40:38.660 23:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:38.660 23:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:40:38.660 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:40:38.660 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:40:38.660 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:40:38.660 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:38.660 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:38.660 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:38.660 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:38.660 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:40:38.660 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:38.660 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:38.660 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:38.660 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:38.660 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:38.660 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:38.660 23:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:38.660 23:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:38.660 23:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:38.660 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:38.660 "name": "Existed_Raid", 00:40:38.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:38.660 "strip_size_kb": 64, 00:40:38.660 "state": "configuring", 00:40:38.660 "raid_level": "raid5f", 00:40:38.660 "superblock": false, 00:40:38.660 "num_base_bdevs": 4, 00:40:38.660 "num_base_bdevs_discovered": 3, 00:40:38.660 "num_base_bdevs_operational": 4, 00:40:38.660 "base_bdevs_list": [ 00:40:38.660 { 00:40:38.660 "name": "BaseBdev1", 00:40:38.660 "uuid": "6d015460-b1c1-4aa3-a6d1-0ce1a6c36ba0", 00:40:38.660 "is_configured": true, 00:40:38.660 "data_offset": 0, 00:40:38.660 "data_size": 65536 00:40:38.660 }, 00:40:38.660 { 00:40:38.660 "name": "BaseBdev2", 00:40:38.660 "uuid": "ce8efb0b-3f19-47b0-aa22-004c7c401faf", 00:40:38.660 "is_configured": true, 00:40:38.660 "data_offset": 0, 00:40:38.660 "data_size": 65536 00:40:38.660 }, 00:40:38.660 { 
00:40:38.660 "name": "BaseBdev3", 00:40:38.660 "uuid": "66160eaa-0cc2-460b-bfce-328f2e332ac4", 00:40:38.660 "is_configured": true, 00:40:38.660 "data_offset": 0, 00:40:38.660 "data_size": 65536 00:40:38.660 }, 00:40:38.660 { 00:40:38.660 "name": "BaseBdev4", 00:40:38.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:38.660 "is_configured": false, 00:40:38.660 "data_offset": 0, 00:40:38.660 "data_size": 0 00:40:38.660 } 00:40:38.660 ] 00:40:38.660 }' 00:40:38.660 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:38.660 23:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:38.955 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:40:38.955 23:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:38.955 23:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:38.955 [2024-12-09 23:22:19.475206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:40:38.955 [2024-12-09 23:22:19.475451] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:40:38.955 [2024-12-09 23:22:19.475473] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:40:38.955 [2024-12-09 23:22:19.475774] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:40:38.955 [2024-12-09 23:22:19.483657] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:40:38.955 [2024-12-09 23:22:19.483683] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:40:38.955 [2024-12-09 23:22:19.484000] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:38.955 BaseBdev4 00:40:38.955 23:22:19 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:38.955 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:40:38.955 23:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:40:38.955 23:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:38.955 23:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:40:38.955 23:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:38.955 23:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:38.955 23:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:40:38.955 23:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:38.955 23:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:38.955 23:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:38.955 23:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:40:38.955 23:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:38.955 23:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:38.955 [ 00:40:38.955 { 00:40:38.955 "name": "BaseBdev4", 00:40:38.955 "aliases": [ 00:40:38.955 "2049656e-88fd-4a53-9159-8f309a767582" 00:40:38.955 ], 00:40:38.955 "product_name": "Malloc disk", 00:40:38.955 "block_size": 512, 00:40:38.955 "num_blocks": 65536, 00:40:38.955 "uuid": "2049656e-88fd-4a53-9159-8f309a767582", 00:40:38.955 "assigned_rate_limits": { 00:40:38.955 "rw_ios_per_sec": 0, 00:40:38.955 
"rw_mbytes_per_sec": 0, 00:40:38.955 "r_mbytes_per_sec": 0, 00:40:38.955 "w_mbytes_per_sec": 0 00:40:38.955 }, 00:40:38.955 "claimed": true, 00:40:38.955 "claim_type": "exclusive_write", 00:40:38.955 "zoned": false, 00:40:38.955 "supported_io_types": { 00:40:38.955 "read": true, 00:40:38.955 "write": true, 00:40:38.955 "unmap": true, 00:40:38.955 "flush": true, 00:40:38.955 "reset": true, 00:40:38.955 "nvme_admin": false, 00:40:38.955 "nvme_io": false, 00:40:38.955 "nvme_io_md": false, 00:40:38.955 "write_zeroes": true, 00:40:38.955 "zcopy": true, 00:40:38.955 "get_zone_info": false, 00:40:38.955 "zone_management": false, 00:40:38.955 "zone_append": false, 00:40:38.955 "compare": false, 00:40:38.955 "compare_and_write": false, 00:40:38.955 "abort": true, 00:40:38.955 "seek_hole": false, 00:40:38.955 "seek_data": false, 00:40:38.955 "copy": true, 00:40:38.955 "nvme_iov_md": false 00:40:38.955 }, 00:40:38.955 "memory_domains": [ 00:40:38.955 { 00:40:38.955 "dma_device_id": "system", 00:40:38.955 "dma_device_type": 1 00:40:38.955 }, 00:40:38.955 { 00:40:38.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:38.955 "dma_device_type": 2 00:40:38.955 } 00:40:38.955 ], 00:40:38.955 "driver_specific": {} 00:40:38.955 } 00:40:38.955 ] 00:40:38.955 23:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:38.955 23:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:40:38.955 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:40:38.955 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:40:38.955 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:40:38.955 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:38.955 23:22:19 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:38.955 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:38.955 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:38.955 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:40:38.955 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:38.955 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:38.955 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:38.956 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:38.956 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:38.956 23:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:38.956 23:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:38.956 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:38.956 23:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:38.956 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:38.956 "name": "Existed_Raid", 00:40:38.956 "uuid": "5688875e-c16e-4f67-bed8-d2f5c3b4f7b7", 00:40:38.956 "strip_size_kb": 64, 00:40:38.956 "state": "online", 00:40:38.956 "raid_level": "raid5f", 00:40:38.956 "superblock": false, 00:40:38.956 "num_base_bdevs": 4, 00:40:38.956 "num_base_bdevs_discovered": 4, 00:40:38.956 "num_base_bdevs_operational": 4, 00:40:38.956 "base_bdevs_list": [ 00:40:38.956 { 00:40:38.956 "name": 
"BaseBdev1", 00:40:38.956 "uuid": "6d015460-b1c1-4aa3-a6d1-0ce1a6c36ba0", 00:40:38.956 "is_configured": true, 00:40:38.956 "data_offset": 0, 00:40:38.956 "data_size": 65536 00:40:38.956 }, 00:40:38.956 { 00:40:38.956 "name": "BaseBdev2", 00:40:38.956 "uuid": "ce8efb0b-3f19-47b0-aa22-004c7c401faf", 00:40:38.956 "is_configured": true, 00:40:38.956 "data_offset": 0, 00:40:38.956 "data_size": 65536 00:40:38.956 }, 00:40:38.956 { 00:40:38.956 "name": "BaseBdev3", 00:40:38.956 "uuid": "66160eaa-0cc2-460b-bfce-328f2e332ac4", 00:40:38.956 "is_configured": true, 00:40:38.956 "data_offset": 0, 00:40:38.956 "data_size": 65536 00:40:38.956 }, 00:40:38.956 { 00:40:38.956 "name": "BaseBdev4", 00:40:38.956 "uuid": "2049656e-88fd-4a53-9159-8f309a767582", 00:40:38.956 "is_configured": true, 00:40:38.956 "data_offset": 0, 00:40:38.956 "data_size": 65536 00:40:38.956 } 00:40:38.956 ] 00:40:38.956 }' 00:40:38.956 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:38.956 23:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:39.524 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:40:39.524 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:40:39.524 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:40:39.524 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:40:39.524 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:40:39.524 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:40:39.524 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:40:39.524 23:22:19 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:40:39.524 23:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.524 23:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:39.524 [2024-12-09 23:22:19.928025] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:39.524 23:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.524 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:40:39.524 "name": "Existed_Raid", 00:40:39.524 "aliases": [ 00:40:39.524 "5688875e-c16e-4f67-bed8-d2f5c3b4f7b7" 00:40:39.524 ], 00:40:39.524 "product_name": "Raid Volume", 00:40:39.524 "block_size": 512, 00:40:39.524 "num_blocks": 196608, 00:40:39.524 "uuid": "5688875e-c16e-4f67-bed8-d2f5c3b4f7b7", 00:40:39.524 "assigned_rate_limits": { 00:40:39.524 "rw_ios_per_sec": 0, 00:40:39.524 "rw_mbytes_per_sec": 0, 00:40:39.524 "r_mbytes_per_sec": 0, 00:40:39.524 "w_mbytes_per_sec": 0 00:40:39.524 }, 00:40:39.524 "claimed": false, 00:40:39.524 "zoned": false, 00:40:39.524 "supported_io_types": { 00:40:39.524 "read": true, 00:40:39.524 "write": true, 00:40:39.524 "unmap": false, 00:40:39.524 "flush": false, 00:40:39.524 "reset": true, 00:40:39.524 "nvme_admin": false, 00:40:39.524 "nvme_io": false, 00:40:39.524 "nvme_io_md": false, 00:40:39.524 "write_zeroes": true, 00:40:39.524 "zcopy": false, 00:40:39.524 "get_zone_info": false, 00:40:39.524 "zone_management": false, 00:40:39.524 "zone_append": false, 00:40:39.524 "compare": false, 00:40:39.524 "compare_and_write": false, 00:40:39.524 "abort": false, 00:40:39.524 "seek_hole": false, 00:40:39.524 "seek_data": false, 00:40:39.524 "copy": false, 00:40:39.524 "nvme_iov_md": false 00:40:39.524 }, 00:40:39.524 "driver_specific": { 00:40:39.524 "raid": { 00:40:39.524 "uuid": "5688875e-c16e-4f67-bed8-d2f5c3b4f7b7", 00:40:39.524 "strip_size_kb": 64, 
00:40:39.524 "state": "online", 00:40:39.524 "raid_level": "raid5f", 00:40:39.524 "superblock": false, 00:40:39.524 "num_base_bdevs": 4, 00:40:39.524 "num_base_bdevs_discovered": 4, 00:40:39.524 "num_base_bdevs_operational": 4, 00:40:39.524 "base_bdevs_list": [ 00:40:39.524 { 00:40:39.524 "name": "BaseBdev1", 00:40:39.524 "uuid": "6d015460-b1c1-4aa3-a6d1-0ce1a6c36ba0", 00:40:39.524 "is_configured": true, 00:40:39.524 "data_offset": 0, 00:40:39.524 "data_size": 65536 00:40:39.524 }, 00:40:39.524 { 00:40:39.524 "name": "BaseBdev2", 00:40:39.524 "uuid": "ce8efb0b-3f19-47b0-aa22-004c7c401faf", 00:40:39.524 "is_configured": true, 00:40:39.524 "data_offset": 0, 00:40:39.524 "data_size": 65536 00:40:39.524 }, 00:40:39.524 { 00:40:39.524 "name": "BaseBdev3", 00:40:39.524 "uuid": "66160eaa-0cc2-460b-bfce-328f2e332ac4", 00:40:39.524 "is_configured": true, 00:40:39.524 "data_offset": 0, 00:40:39.524 "data_size": 65536 00:40:39.524 }, 00:40:39.524 { 00:40:39.524 "name": "BaseBdev4", 00:40:39.524 "uuid": "2049656e-88fd-4a53-9159-8f309a767582", 00:40:39.524 "is_configured": true, 00:40:39.524 "data_offset": 0, 00:40:39.524 "data_size": 65536 00:40:39.524 } 00:40:39.524 ] 00:40:39.524 } 00:40:39.524 } 00:40:39.524 }' 00:40:39.524 23:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:40:39.524 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:40:39.524 BaseBdev2 00:40:39.524 BaseBdev3 00:40:39.524 BaseBdev4' 00:40:39.524 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:39.524 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:40:39.524 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:40:39.524 23:22:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:40:39.524 23:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.525 23:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:39.525 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:39.525 23:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.525 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:40:39.525 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:40:39.525 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:40:39.525 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:40:39.525 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:39.525 23:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.525 23:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:39.525 23:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.525 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:40:39.525 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:40:39.525 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:40:39.525 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:40:39.525 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:39.525 23:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.525 23:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:39.784 23:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.784 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:40:39.784 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:40:39.784 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:40:39.784 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:40:39.784 23:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.784 23:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:39.784 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:39.784 23:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.784 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:40:39.784 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:40:39.784 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:40:39.784 23:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.784 23:22:20 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:40:39.784 [2024-12-09 23:22:20.247435] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:40:39.784 23:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.784 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:40:39.784 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:40:39.784 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:40:39.784 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:40:39.784 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:40:39.784 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:40:39.784 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:39.784 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:39.784 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:39.784 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:39.784 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:40:39.784 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:39.784 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:39.784 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:39.784 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:39.784 23:22:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:39.784 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:39.784 23:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.784 23:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:39.784 23:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.784 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:39.784 "name": "Existed_Raid", 00:40:39.784 "uuid": "5688875e-c16e-4f67-bed8-d2f5c3b4f7b7", 00:40:39.784 "strip_size_kb": 64, 00:40:39.784 "state": "online", 00:40:39.784 "raid_level": "raid5f", 00:40:39.784 "superblock": false, 00:40:39.784 "num_base_bdevs": 4, 00:40:39.784 "num_base_bdevs_discovered": 3, 00:40:39.784 "num_base_bdevs_operational": 3, 00:40:39.784 "base_bdevs_list": [ 00:40:39.784 { 00:40:39.784 "name": null, 00:40:39.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:39.784 "is_configured": false, 00:40:39.784 "data_offset": 0, 00:40:39.784 "data_size": 65536 00:40:39.784 }, 00:40:39.784 { 00:40:39.784 "name": "BaseBdev2", 00:40:39.784 "uuid": "ce8efb0b-3f19-47b0-aa22-004c7c401faf", 00:40:39.784 "is_configured": true, 00:40:39.784 "data_offset": 0, 00:40:39.784 "data_size": 65536 00:40:39.784 }, 00:40:39.784 { 00:40:39.784 "name": "BaseBdev3", 00:40:39.784 "uuid": "66160eaa-0cc2-460b-bfce-328f2e332ac4", 00:40:39.784 "is_configured": true, 00:40:39.784 "data_offset": 0, 00:40:39.784 "data_size": 65536 00:40:39.784 }, 00:40:39.784 { 00:40:39.784 "name": "BaseBdev4", 00:40:39.784 "uuid": "2049656e-88fd-4a53-9159-8f309a767582", 00:40:39.784 "is_configured": true, 00:40:39.784 "data_offset": 0, 00:40:39.784 "data_size": 65536 00:40:39.784 } 00:40:39.784 ] 00:40:39.784 }' 00:40:39.784 
23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:39.784 23:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:40.353 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:40:40.353 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:40:40.353 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:40.353 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:40:40.353 23:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:40.353 23:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:40.353 23:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:40.353 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:40:40.353 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:40:40.353 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:40:40.353 23:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:40.353 23:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:40.353 [2024-12-09 23:22:20.818855] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:40:40.353 [2024-12-09 23:22:20.819079] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:40:40.353 [2024-12-09 23:22:20.914071] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:40.353 23:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:40:40.353 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:40:40.353 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:40:40.353 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:40.353 23:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:40.353 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:40:40.353 23:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:40.353 23:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:40.353 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:40:40.353 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:40:40.353 23:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:40:40.353 23:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:40.353 23:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:40.353 [2024-12-09 23:22:20.966010] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:40:40.612 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:40.612 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:40:40.612 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:40:40.612 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:40.612 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:40:40.612 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:40.612 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:40.612 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:40.612 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:40:40.612 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:40:40.612 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:40:40.612 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:40.612 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:40.612 [2024-12-09 23:22:21.116983] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:40:40.612 [2024-12-09 23:22:21.117034] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:40:40.612 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:40.612 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:40:40.612 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:40:40.612 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:40.613 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:40.613 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:40:40.613 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:40.613 23:22:21 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:40.873 BaseBdev2 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:40.873 [ 00:40:40.873 { 00:40:40.873 "name": "BaseBdev2", 00:40:40.873 "aliases": [ 00:40:40.873 "e3371291-0c20-41f1-9b28-8397e33df31f" 00:40:40.873 ], 00:40:40.873 "product_name": "Malloc disk", 00:40:40.873 "block_size": 512, 00:40:40.873 "num_blocks": 65536, 00:40:40.873 "uuid": "e3371291-0c20-41f1-9b28-8397e33df31f", 00:40:40.873 "assigned_rate_limits": { 00:40:40.873 "rw_ios_per_sec": 0, 00:40:40.873 "rw_mbytes_per_sec": 0, 00:40:40.873 "r_mbytes_per_sec": 0, 00:40:40.873 "w_mbytes_per_sec": 0 00:40:40.873 }, 00:40:40.873 "claimed": false, 00:40:40.873 "zoned": false, 00:40:40.873 "supported_io_types": { 00:40:40.873 "read": true, 00:40:40.873 "write": true, 00:40:40.873 "unmap": true, 00:40:40.873 "flush": true, 00:40:40.873 "reset": true, 00:40:40.873 "nvme_admin": false, 00:40:40.873 "nvme_io": false, 00:40:40.873 "nvme_io_md": false, 00:40:40.873 "write_zeroes": true, 00:40:40.873 "zcopy": true, 00:40:40.873 "get_zone_info": false, 00:40:40.873 "zone_management": false, 00:40:40.873 "zone_append": false, 00:40:40.873 "compare": false, 00:40:40.873 "compare_and_write": false, 00:40:40.873 "abort": true, 00:40:40.873 "seek_hole": false, 00:40:40.873 "seek_data": false, 00:40:40.873 "copy": true, 00:40:40.873 "nvme_iov_md": false 00:40:40.873 }, 00:40:40.873 "memory_domains": [ 00:40:40.873 { 00:40:40.873 "dma_device_id": "system", 00:40:40.873 "dma_device_type": 1 00:40:40.873 }, 
00:40:40.873 { 00:40:40.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:40.873 "dma_device_type": 2 00:40:40.873 } 00:40:40.873 ], 00:40:40.873 "driver_specific": {} 00:40:40.873 } 00:40:40.873 ] 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:40.873 BaseBdev3 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:40.873 [ 00:40:40.873 { 00:40:40.873 "name": "BaseBdev3", 00:40:40.873 "aliases": [ 00:40:40.873 "3be9cad2-f2fe-4eb2-a6b6-3667018ae36f" 00:40:40.873 ], 00:40:40.873 "product_name": "Malloc disk", 00:40:40.873 "block_size": 512, 00:40:40.873 "num_blocks": 65536, 00:40:40.873 "uuid": "3be9cad2-f2fe-4eb2-a6b6-3667018ae36f", 00:40:40.873 "assigned_rate_limits": { 00:40:40.873 "rw_ios_per_sec": 0, 00:40:40.873 "rw_mbytes_per_sec": 0, 00:40:40.873 "r_mbytes_per_sec": 0, 00:40:40.873 "w_mbytes_per_sec": 0 00:40:40.873 }, 00:40:40.873 "claimed": false, 00:40:40.873 "zoned": false, 00:40:40.873 "supported_io_types": { 00:40:40.873 "read": true, 00:40:40.873 "write": true, 00:40:40.873 "unmap": true, 00:40:40.873 "flush": true, 00:40:40.873 "reset": true, 00:40:40.873 "nvme_admin": false, 00:40:40.873 "nvme_io": false, 00:40:40.873 "nvme_io_md": false, 00:40:40.873 "write_zeroes": true, 00:40:40.873 "zcopy": true, 00:40:40.873 "get_zone_info": false, 00:40:40.873 "zone_management": false, 00:40:40.873 "zone_append": false, 00:40:40.873 "compare": false, 00:40:40.873 "compare_and_write": false, 00:40:40.873 "abort": true, 00:40:40.873 "seek_hole": false, 00:40:40.873 "seek_data": false, 00:40:40.873 "copy": true, 00:40:40.873 "nvme_iov_md": false 00:40:40.873 }, 00:40:40.873 "memory_domains": [ 00:40:40.873 { 00:40:40.873 "dma_device_id": "system", 00:40:40.873 
"dma_device_type": 1 00:40:40.873 }, 00:40:40.873 { 00:40:40.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:40.873 "dma_device_type": 2 00:40:40.873 } 00:40:40.873 ], 00:40:40.873 "driver_specific": {} 00:40:40.873 } 00:40:40.873 ] 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:40.873 BaseBdev4 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:40:40.873 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:40:40.874 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:40.874 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:40:40.874 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:40.874 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:40.874 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:40:40.874 23:22:21 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:40.874 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:40.874 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:40.874 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:40:40.874 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:40.874 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:41.133 [ 00:40:41.133 { 00:40:41.133 "name": "BaseBdev4", 00:40:41.133 "aliases": [ 00:40:41.133 "dd405048-a7cb-4058-80dd-d2e215fb94a3" 00:40:41.133 ], 00:40:41.133 "product_name": "Malloc disk", 00:40:41.133 "block_size": 512, 00:40:41.133 "num_blocks": 65536, 00:40:41.133 "uuid": "dd405048-a7cb-4058-80dd-d2e215fb94a3", 00:40:41.133 "assigned_rate_limits": { 00:40:41.133 "rw_ios_per_sec": 0, 00:40:41.133 "rw_mbytes_per_sec": 0, 00:40:41.133 "r_mbytes_per_sec": 0, 00:40:41.133 "w_mbytes_per_sec": 0 00:40:41.133 }, 00:40:41.133 "claimed": false, 00:40:41.133 "zoned": false, 00:40:41.133 "supported_io_types": { 00:40:41.133 "read": true, 00:40:41.133 "write": true, 00:40:41.133 "unmap": true, 00:40:41.133 "flush": true, 00:40:41.133 "reset": true, 00:40:41.133 "nvme_admin": false, 00:40:41.133 "nvme_io": false, 00:40:41.133 "nvme_io_md": false, 00:40:41.133 "write_zeroes": true, 00:40:41.133 "zcopy": true, 00:40:41.133 "get_zone_info": false, 00:40:41.133 "zone_management": false, 00:40:41.133 "zone_append": false, 00:40:41.133 "compare": false, 00:40:41.133 "compare_and_write": false, 00:40:41.133 "abort": true, 00:40:41.133 "seek_hole": false, 00:40:41.133 "seek_data": false, 00:40:41.134 "copy": true, 00:40:41.134 "nvme_iov_md": false 00:40:41.134 }, 00:40:41.134 "memory_domains": [ 00:40:41.134 { 00:40:41.134 
"dma_device_id": "system", 00:40:41.134 "dma_device_type": 1 00:40:41.134 }, 00:40:41.134 { 00:40:41.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:41.134 "dma_device_type": 2 00:40:41.134 } 00:40:41.134 ], 00:40:41.134 "driver_specific": {} 00:40:41.134 } 00:40:41.134 ] 00:40:41.134 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:41.134 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:40:41.134 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:40:41.134 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:40:41.134 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:40:41.134 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:41.134 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:41.134 [2024-12-09 23:22:21.536044] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:40:41.134 [2024-12-09 23:22:21.536207] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:40:41.134 [2024-12-09 23:22:21.536322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:40:41.134 [2024-12-09 23:22:21.538575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:40:41.134 [2024-12-09 23:22:21.538754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:40:41.134 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:41.134 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:40:41.134 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:41.134 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:41.134 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:41.134 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:41.134 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:40:41.134 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:41.134 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:41.134 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:41.134 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:41.134 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:41.134 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:41.134 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:41.134 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:41.134 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:41.134 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:41.134 "name": "Existed_Raid", 00:40:41.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:41.134 "strip_size_kb": 64, 00:40:41.134 "state": "configuring", 00:40:41.134 "raid_level": "raid5f", 00:40:41.134 "superblock": false, 00:40:41.134 
"num_base_bdevs": 4, 00:40:41.134 "num_base_bdevs_discovered": 3, 00:40:41.134 "num_base_bdevs_operational": 4, 00:40:41.134 "base_bdevs_list": [ 00:40:41.134 { 00:40:41.134 "name": "BaseBdev1", 00:40:41.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:41.134 "is_configured": false, 00:40:41.134 "data_offset": 0, 00:40:41.134 "data_size": 0 00:40:41.134 }, 00:40:41.134 { 00:40:41.134 "name": "BaseBdev2", 00:40:41.134 "uuid": "e3371291-0c20-41f1-9b28-8397e33df31f", 00:40:41.134 "is_configured": true, 00:40:41.134 "data_offset": 0, 00:40:41.134 "data_size": 65536 00:40:41.134 }, 00:40:41.134 { 00:40:41.134 "name": "BaseBdev3", 00:40:41.134 "uuid": "3be9cad2-f2fe-4eb2-a6b6-3667018ae36f", 00:40:41.134 "is_configured": true, 00:40:41.134 "data_offset": 0, 00:40:41.134 "data_size": 65536 00:40:41.134 }, 00:40:41.134 { 00:40:41.134 "name": "BaseBdev4", 00:40:41.134 "uuid": "dd405048-a7cb-4058-80dd-d2e215fb94a3", 00:40:41.134 "is_configured": true, 00:40:41.134 "data_offset": 0, 00:40:41.134 "data_size": 65536 00:40:41.134 } 00:40:41.134 ] 00:40:41.134 }' 00:40:41.134 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:41.134 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:41.394 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:40:41.394 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:41.394 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:41.394 [2024-12-09 23:22:21.919555] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:40:41.394 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:41.394 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:40:41.394 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:41.394 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:41.394 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:41.394 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:41.394 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:40:41.394 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:41.394 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:41.394 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:41.394 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:41.394 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:41.394 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:41.394 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:41.394 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:41.394 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:41.394 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:41.394 "name": "Existed_Raid", 00:40:41.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:41.394 "strip_size_kb": 64, 00:40:41.394 "state": "configuring", 00:40:41.394 "raid_level": "raid5f", 00:40:41.394 "superblock": false, 00:40:41.394 "num_base_bdevs": 4, 
00:40:41.394 "num_base_bdevs_discovered": 2, 00:40:41.394 "num_base_bdevs_operational": 4, 00:40:41.394 "base_bdevs_list": [ 00:40:41.394 { 00:40:41.394 "name": "BaseBdev1", 00:40:41.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:41.394 "is_configured": false, 00:40:41.394 "data_offset": 0, 00:40:41.394 "data_size": 0 00:40:41.394 }, 00:40:41.394 { 00:40:41.394 "name": null, 00:40:41.394 "uuid": "e3371291-0c20-41f1-9b28-8397e33df31f", 00:40:41.394 "is_configured": false, 00:40:41.394 "data_offset": 0, 00:40:41.394 "data_size": 65536 00:40:41.394 }, 00:40:41.394 { 00:40:41.394 "name": "BaseBdev3", 00:40:41.394 "uuid": "3be9cad2-f2fe-4eb2-a6b6-3667018ae36f", 00:40:41.394 "is_configured": true, 00:40:41.394 "data_offset": 0, 00:40:41.394 "data_size": 65536 00:40:41.394 }, 00:40:41.394 { 00:40:41.394 "name": "BaseBdev4", 00:40:41.394 "uuid": "dd405048-a7cb-4058-80dd-d2e215fb94a3", 00:40:41.394 "is_configured": true, 00:40:41.394 "data_offset": 0, 00:40:41.394 "data_size": 65536 00:40:41.394 } 00:40:41.394 ] 00:40:41.394 }' 00:40:41.394 23:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:41.394 23:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:40:41.963 23:22:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:41.963 [2024-12-09 23:22:22.412712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:40:41.963 BaseBdev1 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:41.963 23:22:22 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:41.963 [ 00:40:41.963 { 00:40:41.963 "name": "BaseBdev1", 00:40:41.963 "aliases": [ 00:40:41.963 "3fc0bbbb-c2c2-4909-a701-8b36bf038436" 00:40:41.963 ], 00:40:41.963 "product_name": "Malloc disk", 00:40:41.963 "block_size": 512, 00:40:41.963 "num_blocks": 65536, 00:40:41.963 "uuid": "3fc0bbbb-c2c2-4909-a701-8b36bf038436", 00:40:41.963 "assigned_rate_limits": { 00:40:41.963 "rw_ios_per_sec": 0, 00:40:41.963 "rw_mbytes_per_sec": 0, 00:40:41.963 "r_mbytes_per_sec": 0, 00:40:41.963 "w_mbytes_per_sec": 0 00:40:41.963 }, 00:40:41.963 "claimed": true, 00:40:41.963 "claim_type": "exclusive_write", 00:40:41.963 "zoned": false, 00:40:41.963 "supported_io_types": { 00:40:41.963 "read": true, 00:40:41.963 "write": true, 00:40:41.963 "unmap": true, 00:40:41.963 "flush": true, 00:40:41.963 "reset": true, 00:40:41.963 "nvme_admin": false, 00:40:41.963 "nvme_io": false, 00:40:41.963 "nvme_io_md": false, 00:40:41.963 "write_zeroes": true, 00:40:41.963 "zcopy": true, 00:40:41.963 "get_zone_info": false, 00:40:41.963 "zone_management": false, 00:40:41.963 "zone_append": false, 00:40:41.963 "compare": false, 00:40:41.963 "compare_and_write": false, 00:40:41.963 "abort": true, 00:40:41.963 "seek_hole": false, 00:40:41.963 "seek_data": false, 00:40:41.963 "copy": true, 00:40:41.963 "nvme_iov_md": false 00:40:41.963 }, 00:40:41.963 "memory_domains": [ 00:40:41.963 { 00:40:41.963 "dma_device_id": "system", 00:40:41.963 "dma_device_type": 1 00:40:41.963 }, 00:40:41.963 { 00:40:41.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:41.963 "dma_device_type": 2 00:40:41.963 } 00:40:41.963 ], 00:40:41.963 "driver_specific": {} 00:40:41.963 } 00:40:41.963 ] 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:40:41.963 23:22:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:41.963 "name": "Existed_Raid", 00:40:41.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:41.963 "strip_size_kb": 64, 00:40:41.963 "state": 
"configuring", 00:40:41.963 "raid_level": "raid5f", 00:40:41.963 "superblock": false, 00:40:41.963 "num_base_bdevs": 4, 00:40:41.963 "num_base_bdevs_discovered": 3, 00:40:41.963 "num_base_bdevs_operational": 4, 00:40:41.963 "base_bdevs_list": [ 00:40:41.963 { 00:40:41.963 "name": "BaseBdev1", 00:40:41.963 "uuid": "3fc0bbbb-c2c2-4909-a701-8b36bf038436", 00:40:41.963 "is_configured": true, 00:40:41.963 "data_offset": 0, 00:40:41.963 "data_size": 65536 00:40:41.963 }, 00:40:41.963 { 00:40:41.963 "name": null, 00:40:41.963 "uuid": "e3371291-0c20-41f1-9b28-8397e33df31f", 00:40:41.963 "is_configured": false, 00:40:41.963 "data_offset": 0, 00:40:41.963 "data_size": 65536 00:40:41.963 }, 00:40:41.963 { 00:40:41.963 "name": "BaseBdev3", 00:40:41.963 "uuid": "3be9cad2-f2fe-4eb2-a6b6-3667018ae36f", 00:40:41.963 "is_configured": true, 00:40:41.963 "data_offset": 0, 00:40:41.963 "data_size": 65536 00:40:41.963 }, 00:40:41.963 { 00:40:41.963 "name": "BaseBdev4", 00:40:41.963 "uuid": "dd405048-a7cb-4058-80dd-d2e215fb94a3", 00:40:41.963 "is_configured": true, 00:40:41.963 "data_offset": 0, 00:40:41.963 "data_size": 65536 00:40:41.963 } 00:40:41.963 ] 00:40:41.963 }' 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:41.963 23:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:42.532 23:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:42.532 23:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:42.532 23:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:42.532 23:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:40:42.532 23:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:42.532 23:22:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:40:42.532 23:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:40:42.532 23:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:42.532 23:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:42.532 [2024-12-09 23:22:22.908168] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:40:42.532 23:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:42.532 23:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:40:42.532 23:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:42.532 23:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:42.532 23:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:42.532 23:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:42.532 23:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:40:42.532 23:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:42.532 23:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:42.532 23:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:42.532 23:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:42.532 23:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:42.532 23:22:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:42.532 23:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:42.532 23:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:42.532 23:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:42.532 23:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:42.532 "name": "Existed_Raid", 00:40:42.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:42.532 "strip_size_kb": 64, 00:40:42.532 "state": "configuring", 00:40:42.532 "raid_level": "raid5f", 00:40:42.532 "superblock": false, 00:40:42.532 "num_base_bdevs": 4, 00:40:42.532 "num_base_bdevs_discovered": 2, 00:40:42.532 "num_base_bdevs_operational": 4, 00:40:42.532 "base_bdevs_list": [ 00:40:42.532 { 00:40:42.532 "name": "BaseBdev1", 00:40:42.532 "uuid": "3fc0bbbb-c2c2-4909-a701-8b36bf038436", 00:40:42.532 "is_configured": true, 00:40:42.532 "data_offset": 0, 00:40:42.532 "data_size": 65536 00:40:42.532 }, 00:40:42.532 { 00:40:42.532 "name": null, 00:40:42.532 "uuid": "e3371291-0c20-41f1-9b28-8397e33df31f", 00:40:42.532 "is_configured": false, 00:40:42.532 "data_offset": 0, 00:40:42.532 "data_size": 65536 00:40:42.532 }, 00:40:42.532 { 00:40:42.532 "name": null, 00:40:42.532 "uuid": "3be9cad2-f2fe-4eb2-a6b6-3667018ae36f", 00:40:42.532 "is_configured": false, 00:40:42.532 "data_offset": 0, 00:40:42.532 "data_size": 65536 00:40:42.532 }, 00:40:42.532 { 00:40:42.532 "name": "BaseBdev4", 00:40:42.532 "uuid": "dd405048-a7cb-4058-80dd-d2e215fb94a3", 00:40:42.532 "is_configured": true, 00:40:42.532 "data_offset": 0, 00:40:42.532 "data_size": 65536 00:40:42.532 } 00:40:42.532 ] 00:40:42.532 }' 00:40:42.532 23:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:42.532 23:22:22 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:42.791 23:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:40:42.791 23:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:42.791 23:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:42.791 23:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:42.791 23:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:42.791 23:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:40:42.791 23:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:40:42.791 23:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:42.791 23:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:42.791 [2024-12-09 23:22:23.291592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:40:42.791 23:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:42.791 23:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:40:42.791 23:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:42.791 23:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:42.791 23:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:42.791 23:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:42.791 
23:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:40:42.791 23:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:42.791 23:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:42.791 23:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:42.791 23:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:42.791 23:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:42.791 23:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:42.791 23:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:42.791 23:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:42.791 23:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:42.791 23:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:42.791 "name": "Existed_Raid", 00:40:42.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:42.791 "strip_size_kb": 64, 00:40:42.791 "state": "configuring", 00:40:42.791 "raid_level": "raid5f", 00:40:42.791 "superblock": false, 00:40:42.791 "num_base_bdevs": 4, 00:40:42.791 "num_base_bdevs_discovered": 3, 00:40:42.791 "num_base_bdevs_operational": 4, 00:40:42.791 "base_bdevs_list": [ 00:40:42.791 { 00:40:42.791 "name": "BaseBdev1", 00:40:42.791 "uuid": "3fc0bbbb-c2c2-4909-a701-8b36bf038436", 00:40:42.791 "is_configured": true, 00:40:42.791 "data_offset": 0, 00:40:42.791 "data_size": 65536 00:40:42.791 }, 00:40:42.791 { 00:40:42.791 "name": null, 00:40:42.791 "uuid": "e3371291-0c20-41f1-9b28-8397e33df31f", 00:40:42.791 "is_configured": 
false, 00:40:42.791 "data_offset": 0, 00:40:42.791 "data_size": 65536 00:40:42.791 }, 00:40:42.791 { 00:40:42.791 "name": "BaseBdev3", 00:40:42.791 "uuid": "3be9cad2-f2fe-4eb2-a6b6-3667018ae36f", 00:40:42.791 "is_configured": true, 00:40:42.791 "data_offset": 0, 00:40:42.791 "data_size": 65536 00:40:42.791 }, 00:40:42.791 { 00:40:42.791 "name": "BaseBdev4", 00:40:42.791 "uuid": "dd405048-a7cb-4058-80dd-d2e215fb94a3", 00:40:42.791 "is_configured": true, 00:40:42.791 "data_offset": 0, 00:40:42.791 "data_size": 65536 00:40:42.791 } 00:40:42.791 ] 00:40:42.791 }' 00:40:42.791 23:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:42.791 23:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:43.361 23:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:43.361 23:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:43.361 23:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:43.361 23:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:40:43.361 23:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:43.361 23:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:40:43.361 23:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:40:43.361 23:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:43.361 23:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:43.361 [2024-12-09 23:22:23.743305] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:40:43.361 23:22:23 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:43.361 23:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:40:43.361 23:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:43.361 23:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:43.361 23:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:43.361 23:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:43.361 23:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:40:43.361 23:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:43.361 23:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:43.361 23:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:43.361 23:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:43.361 23:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:43.361 23:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:43.361 23:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:43.361 23:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:43.361 23:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:43.361 23:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:43.361 "name": "Existed_Raid", 00:40:43.361 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:40:43.361 "strip_size_kb": 64, 00:40:43.361 "state": "configuring", 00:40:43.361 "raid_level": "raid5f", 00:40:43.361 "superblock": false, 00:40:43.361 "num_base_bdevs": 4, 00:40:43.361 "num_base_bdevs_discovered": 2, 00:40:43.361 "num_base_bdevs_operational": 4, 00:40:43.361 "base_bdevs_list": [ 00:40:43.361 { 00:40:43.361 "name": null, 00:40:43.361 "uuid": "3fc0bbbb-c2c2-4909-a701-8b36bf038436", 00:40:43.361 "is_configured": false, 00:40:43.361 "data_offset": 0, 00:40:43.361 "data_size": 65536 00:40:43.361 }, 00:40:43.361 { 00:40:43.361 "name": null, 00:40:43.361 "uuid": "e3371291-0c20-41f1-9b28-8397e33df31f", 00:40:43.361 "is_configured": false, 00:40:43.361 "data_offset": 0, 00:40:43.361 "data_size": 65536 00:40:43.361 }, 00:40:43.361 { 00:40:43.361 "name": "BaseBdev3", 00:40:43.361 "uuid": "3be9cad2-f2fe-4eb2-a6b6-3667018ae36f", 00:40:43.361 "is_configured": true, 00:40:43.361 "data_offset": 0, 00:40:43.361 "data_size": 65536 00:40:43.361 }, 00:40:43.361 { 00:40:43.361 "name": "BaseBdev4", 00:40:43.361 "uuid": "dd405048-a7cb-4058-80dd-d2e215fb94a3", 00:40:43.361 "is_configured": true, 00:40:43.361 "data_offset": 0, 00:40:43.361 "data_size": 65536 00:40:43.361 } 00:40:43.361 ] 00:40:43.361 }' 00:40:43.361 23:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:43.361 23:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:43.930 23:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:40:43.930 23:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:43.930 23:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:43.930 23:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:43.930 23:22:24 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:43.930 23:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:40:43.930 23:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:40:43.930 23:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:43.930 23:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:43.930 [2024-12-09 23:22:24.307700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:40:43.930 23:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:43.930 23:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:40:43.930 23:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:43.930 23:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:43.930 23:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:43.930 23:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:43.930 23:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:40:43.930 23:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:43.930 23:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:43.930 23:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:43.930 23:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:43.930 23:22:24 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:43.930 23:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:43.930 23:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:43.930 23:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:43.930 23:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:43.930 23:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:43.930 "name": "Existed_Raid", 00:40:43.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:43.930 "strip_size_kb": 64, 00:40:43.930 "state": "configuring", 00:40:43.930 "raid_level": "raid5f", 00:40:43.930 "superblock": false, 00:40:43.930 "num_base_bdevs": 4, 00:40:43.930 "num_base_bdevs_discovered": 3, 00:40:43.930 "num_base_bdevs_operational": 4, 00:40:43.930 "base_bdevs_list": [ 00:40:43.930 { 00:40:43.930 "name": null, 00:40:43.930 "uuid": "3fc0bbbb-c2c2-4909-a701-8b36bf038436", 00:40:43.930 "is_configured": false, 00:40:43.930 "data_offset": 0, 00:40:43.930 "data_size": 65536 00:40:43.930 }, 00:40:43.930 { 00:40:43.930 "name": "BaseBdev2", 00:40:43.930 "uuid": "e3371291-0c20-41f1-9b28-8397e33df31f", 00:40:43.930 "is_configured": true, 00:40:43.930 "data_offset": 0, 00:40:43.930 "data_size": 65536 00:40:43.930 }, 00:40:43.930 { 00:40:43.930 "name": "BaseBdev3", 00:40:43.930 "uuid": "3be9cad2-f2fe-4eb2-a6b6-3667018ae36f", 00:40:43.930 "is_configured": true, 00:40:43.930 "data_offset": 0, 00:40:43.930 "data_size": 65536 00:40:43.930 }, 00:40:43.930 { 00:40:43.930 "name": "BaseBdev4", 00:40:43.930 "uuid": "dd405048-a7cb-4058-80dd-d2e215fb94a3", 00:40:43.930 "is_configured": true, 00:40:43.930 "data_offset": 0, 00:40:43.930 "data_size": 65536 00:40:43.930 } 00:40:43.930 ] 00:40:43.930 }' 00:40:43.930 23:22:24 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:43.930 23:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:44.188 23:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:40:44.188 23:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:44.188 23:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:44.188 23:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:44.188 23:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:44.188 23:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:40:44.188 23:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:40:44.188 23:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:44.188 23:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:44.188 23:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:44.188 23:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:44.447 23:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3fc0bbbb-c2c2-4909-a701-8b36bf038436 00:40:44.447 23:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:44.447 23:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:44.447 [2024-12-09 23:22:24.865944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:40:44.447 [2024-12-09 
23:22:24.866006] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:40:44.447 [2024-12-09 23:22:24.866015] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:40:44.447 [2024-12-09 23:22:24.866289] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:40:44.447 [2024-12-09 23:22:24.873780] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:40:44.447 [2024-12-09 23:22:24.873920] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:40:44.447 [2024-12-09 23:22:24.874203] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:44.447 NewBaseBdev 00:40:44.447 23:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:44.447 23:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:40:44.447 23:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:40:44.447 23:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:44.447 23:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:40:44.447 23:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:44.447 23:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:44.447 23:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:40:44.447 23:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:44.447 23:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:44.447 23:22:24 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:44.447 23:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:40:44.447 23:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:44.447 23:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:44.447 [ 00:40:44.447 { 00:40:44.447 "name": "NewBaseBdev", 00:40:44.447 "aliases": [ 00:40:44.447 "3fc0bbbb-c2c2-4909-a701-8b36bf038436" 00:40:44.447 ], 00:40:44.447 "product_name": "Malloc disk", 00:40:44.447 "block_size": 512, 00:40:44.447 "num_blocks": 65536, 00:40:44.447 "uuid": "3fc0bbbb-c2c2-4909-a701-8b36bf038436", 00:40:44.447 "assigned_rate_limits": { 00:40:44.447 "rw_ios_per_sec": 0, 00:40:44.447 "rw_mbytes_per_sec": 0, 00:40:44.447 "r_mbytes_per_sec": 0, 00:40:44.447 "w_mbytes_per_sec": 0 00:40:44.447 }, 00:40:44.447 "claimed": true, 00:40:44.447 "claim_type": "exclusive_write", 00:40:44.447 "zoned": false, 00:40:44.447 "supported_io_types": { 00:40:44.447 "read": true, 00:40:44.447 "write": true, 00:40:44.447 "unmap": true, 00:40:44.447 "flush": true, 00:40:44.447 "reset": true, 00:40:44.447 "nvme_admin": false, 00:40:44.447 "nvme_io": false, 00:40:44.447 "nvme_io_md": false, 00:40:44.447 "write_zeroes": true, 00:40:44.447 "zcopy": true, 00:40:44.447 "get_zone_info": false, 00:40:44.447 "zone_management": false, 00:40:44.447 "zone_append": false, 00:40:44.447 "compare": false, 00:40:44.447 "compare_and_write": false, 00:40:44.447 "abort": true, 00:40:44.447 "seek_hole": false, 00:40:44.447 "seek_data": false, 00:40:44.447 "copy": true, 00:40:44.447 "nvme_iov_md": false 00:40:44.447 }, 00:40:44.447 "memory_domains": [ 00:40:44.447 { 00:40:44.447 "dma_device_id": "system", 00:40:44.447 "dma_device_type": 1 00:40:44.447 }, 00:40:44.447 { 00:40:44.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:44.447 "dma_device_type": 2 00:40:44.447 } 
00:40:44.447 ], 00:40:44.447 "driver_specific": {} 00:40:44.447 } 00:40:44.447 ] 00:40:44.447 23:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:44.447 23:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:40:44.447 23:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:40:44.448 23:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:44.448 23:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:44.448 23:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:44.448 23:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:44.448 23:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:40:44.448 23:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:44.448 23:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:44.448 23:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:44.448 23:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:44.448 23:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:44.448 23:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:44.448 23:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:44.448 23:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:44.448 23:22:24 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:44.448 23:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:44.448 "name": "Existed_Raid", 00:40:44.448 "uuid": "9de97d88-12b0-4095-a716-d2e3007d9b85", 00:40:44.448 "strip_size_kb": 64, 00:40:44.448 "state": "online", 00:40:44.448 "raid_level": "raid5f", 00:40:44.448 "superblock": false, 00:40:44.448 "num_base_bdevs": 4, 00:40:44.448 "num_base_bdevs_discovered": 4, 00:40:44.448 "num_base_bdevs_operational": 4, 00:40:44.448 "base_bdevs_list": [ 00:40:44.448 { 00:40:44.448 "name": "NewBaseBdev", 00:40:44.448 "uuid": "3fc0bbbb-c2c2-4909-a701-8b36bf038436", 00:40:44.448 "is_configured": true, 00:40:44.448 "data_offset": 0, 00:40:44.448 "data_size": 65536 00:40:44.448 }, 00:40:44.448 { 00:40:44.448 "name": "BaseBdev2", 00:40:44.448 "uuid": "e3371291-0c20-41f1-9b28-8397e33df31f", 00:40:44.448 "is_configured": true, 00:40:44.448 "data_offset": 0, 00:40:44.448 "data_size": 65536 00:40:44.448 }, 00:40:44.448 { 00:40:44.448 "name": "BaseBdev3", 00:40:44.448 "uuid": "3be9cad2-f2fe-4eb2-a6b6-3667018ae36f", 00:40:44.448 "is_configured": true, 00:40:44.448 "data_offset": 0, 00:40:44.448 "data_size": 65536 00:40:44.448 }, 00:40:44.448 { 00:40:44.448 "name": "BaseBdev4", 00:40:44.448 "uuid": "dd405048-a7cb-4058-80dd-d2e215fb94a3", 00:40:44.448 "is_configured": true, 00:40:44.448 "data_offset": 0, 00:40:44.448 "data_size": 65536 00:40:44.448 } 00:40:44.448 ] 00:40:44.448 }' 00:40:44.448 23:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:44.448 23:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:45.015 23:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:40:45.015 23:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:40:45.015 23:22:25 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:40:45.015 23:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:40:45.015 23:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:40:45.015 23:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:40:45.015 23:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:40:45.015 23:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:45.015 23:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:45.015 23:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:40:45.015 [2024-12-09 23:22:25.354802] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:45.015 23:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:45.015 23:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:40:45.015 "name": "Existed_Raid", 00:40:45.015 "aliases": [ 00:40:45.015 "9de97d88-12b0-4095-a716-d2e3007d9b85" 00:40:45.015 ], 00:40:45.015 "product_name": "Raid Volume", 00:40:45.015 "block_size": 512, 00:40:45.015 "num_blocks": 196608, 00:40:45.015 "uuid": "9de97d88-12b0-4095-a716-d2e3007d9b85", 00:40:45.015 "assigned_rate_limits": { 00:40:45.015 "rw_ios_per_sec": 0, 00:40:45.015 "rw_mbytes_per_sec": 0, 00:40:45.015 "r_mbytes_per_sec": 0, 00:40:45.015 "w_mbytes_per_sec": 0 00:40:45.015 }, 00:40:45.015 "claimed": false, 00:40:45.015 "zoned": false, 00:40:45.015 "supported_io_types": { 00:40:45.015 "read": true, 00:40:45.015 "write": true, 00:40:45.015 "unmap": false, 00:40:45.015 "flush": false, 00:40:45.015 "reset": true, 00:40:45.015 "nvme_admin": false, 00:40:45.015 "nvme_io": false, 00:40:45.015 "nvme_io_md": 
false, 00:40:45.015 "write_zeroes": true, 00:40:45.015 "zcopy": false, 00:40:45.015 "get_zone_info": false, 00:40:45.015 "zone_management": false, 00:40:45.015 "zone_append": false, 00:40:45.015 "compare": false, 00:40:45.015 "compare_and_write": false, 00:40:45.015 "abort": false, 00:40:45.015 "seek_hole": false, 00:40:45.015 "seek_data": false, 00:40:45.015 "copy": false, 00:40:45.015 "nvme_iov_md": false 00:40:45.015 }, 00:40:45.015 "driver_specific": { 00:40:45.015 "raid": { 00:40:45.015 "uuid": "9de97d88-12b0-4095-a716-d2e3007d9b85", 00:40:45.015 "strip_size_kb": 64, 00:40:45.015 "state": "online", 00:40:45.015 "raid_level": "raid5f", 00:40:45.015 "superblock": false, 00:40:45.015 "num_base_bdevs": 4, 00:40:45.015 "num_base_bdevs_discovered": 4, 00:40:45.015 "num_base_bdevs_operational": 4, 00:40:45.015 "base_bdevs_list": [ 00:40:45.015 { 00:40:45.015 "name": "NewBaseBdev", 00:40:45.015 "uuid": "3fc0bbbb-c2c2-4909-a701-8b36bf038436", 00:40:45.015 "is_configured": true, 00:40:45.015 "data_offset": 0, 00:40:45.015 "data_size": 65536 00:40:45.015 }, 00:40:45.015 { 00:40:45.015 "name": "BaseBdev2", 00:40:45.015 "uuid": "e3371291-0c20-41f1-9b28-8397e33df31f", 00:40:45.015 "is_configured": true, 00:40:45.015 "data_offset": 0, 00:40:45.015 "data_size": 65536 00:40:45.015 }, 00:40:45.015 { 00:40:45.015 "name": "BaseBdev3", 00:40:45.015 "uuid": "3be9cad2-f2fe-4eb2-a6b6-3667018ae36f", 00:40:45.015 "is_configured": true, 00:40:45.015 "data_offset": 0, 00:40:45.015 "data_size": 65536 00:40:45.015 }, 00:40:45.015 { 00:40:45.015 "name": "BaseBdev4", 00:40:45.015 "uuid": "dd405048-a7cb-4058-80dd-d2e215fb94a3", 00:40:45.015 "is_configured": true, 00:40:45.015 "data_offset": 0, 00:40:45.015 "data_size": 65536 00:40:45.015 } 00:40:45.015 ] 00:40:45.015 } 00:40:45.015 } 00:40:45.015 }' 00:40:45.015 23:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:40:45.015 23:22:25 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:40:45.015 BaseBdev2 00:40:45.015 BaseBdev3 00:40:45.015 BaseBdev4' 00:40:45.015 23:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:45.015 23:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:40:45.015 23:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:40:45.015 23:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:40:45.015 23:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:45.015 23:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:45.015 23:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:45.015 23:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:45.015 23:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:40:45.016 23:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:40:45.016 23:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:40:45.016 23:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:40:45.016 23:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:45.016 23:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:45.016 23:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:40:45.016 23:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:45.016 23:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:40:45.016 23:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:40:45.016 23:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:40:45.016 23:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:40:45.016 23:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:45.016 23:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:45.016 23:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:45.016 23:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:45.016 23:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:40:45.016 23:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:40:45.016 23:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:40:45.016 23:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:40:45.016 23:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:45.016 23:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:45.016 23:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:45.016 23:22:25 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:45.274 23:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:40:45.274 23:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:40:45.274 23:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:40:45.274 23:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:45.274 23:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:45.274 [2024-12-09 23:22:25.654080] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:40:45.274 [2024-12-09 23:22:25.654112] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:40:45.274 [2024-12-09 23:22:25.654192] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:45.274 [2024-12-09 23:22:25.654531] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:40:45.274 [2024-12-09 23:22:25.654546] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:40:45.274 23:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:45.274 23:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82655 00:40:45.274 23:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 82655 ']' 00:40:45.274 23:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 82655 00:40:45.274 23:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:40:45.274 23:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:45.274 23:22:25 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82655 00:40:45.274 killing process with pid 82655 00:40:45.274 23:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:45.275 23:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:45.275 23:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82655' 00:40:45.275 23:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 82655 00:40:45.275 [2024-12-09 23:22:25.693914] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:40:45.275 23:22:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 82655 00:40:45.533 [2024-12-09 23:22:26.090762] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:40:46.921 00:40:46.921 real 0m11.113s 00:40:46.921 user 0m17.466s 00:40:46.921 sys 0m2.343s 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:46.921 ************************************ 00:40:46.921 END TEST raid5f_state_function_test 00:40:46.921 ************************************ 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:46.921 23:22:27 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:40:46.921 23:22:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:40:46.921 23:22:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:46.921 23:22:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:40:46.921 ************************************ 00:40:46.921 START TEST 
raid5f_state_function_test_sb 00:40:46.921 ************************************ 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:40:46.921 
23:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83320 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83320' 00:40:46.921 Process raid pid: 83320 00:40:46.921 23:22:27 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83320 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83320 ']' 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:46.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:46.921 23:22:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:46.921 [2024-12-09 23:22:27.426908] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:40:46.921 [2024-12-09 23:22:27.427032] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:47.195 [2024-12-09 23:22:27.608284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:47.195 [2024-12-09 23:22:27.732007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:47.454 [2024-12-09 23:22:27.942861] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:40:47.454 [2024-12-09 23:22:27.942914] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:40:47.714 23:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:47.714 23:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:40:47.714 23:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:40:47.714 23:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:47.714 23:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:47.714 [2024-12-09 23:22:28.257145] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:40:47.714 [2024-12-09 23:22:28.257209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:40:47.714 [2024-12-09 23:22:28.257221] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:40:47.714 [2024-12-09 23:22:28.257233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:40:47.714 [2024-12-09 23:22:28.257241] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:40:47.714 [2024-12-09 23:22:28.257253] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:40:47.714 [2024-12-09 23:22:28.257260] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:40:47.714 [2024-12-09 23:22:28.257272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:40:47.714 23:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:47.714 23:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:40:47.714 23:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:47.714 23:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:47.714 23:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:47.714 23:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:47.714 23:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:40:47.714 23:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:47.714 23:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:47.714 23:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:47.714 23:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:47.714 23:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:47.714 23:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:40:47.714 23:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:47.714 23:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:47.714 23:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:47.714 23:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:47.714 "name": "Existed_Raid", 00:40:47.714 "uuid": "993c52c1-bf52-4a76-bb2c-fff81fed35c6", 00:40:47.714 "strip_size_kb": 64, 00:40:47.714 "state": "configuring", 00:40:47.714 "raid_level": "raid5f", 00:40:47.714 "superblock": true, 00:40:47.714 "num_base_bdevs": 4, 00:40:47.714 "num_base_bdevs_discovered": 0, 00:40:47.714 "num_base_bdevs_operational": 4, 00:40:47.714 "base_bdevs_list": [ 00:40:47.714 { 00:40:47.714 "name": "BaseBdev1", 00:40:47.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:47.714 "is_configured": false, 00:40:47.714 "data_offset": 0, 00:40:47.714 "data_size": 0 00:40:47.714 }, 00:40:47.714 { 00:40:47.714 "name": "BaseBdev2", 00:40:47.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:47.714 "is_configured": false, 00:40:47.714 "data_offset": 0, 00:40:47.714 "data_size": 0 00:40:47.714 }, 00:40:47.714 { 00:40:47.714 "name": "BaseBdev3", 00:40:47.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:47.714 "is_configured": false, 00:40:47.714 "data_offset": 0, 00:40:47.714 "data_size": 0 00:40:47.714 }, 00:40:47.714 { 00:40:47.714 "name": "BaseBdev4", 00:40:47.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:47.714 "is_configured": false, 00:40:47.714 "data_offset": 0, 00:40:47.714 "data_size": 0 00:40:47.714 } 00:40:47.714 ] 00:40:47.714 }' 00:40:47.714 23:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:47.714 23:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:40:48.282 23:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:40:48.282 23:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.282 23:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:48.283 [2024-12-09 23:22:28.720461] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:40:48.283 [2024-12-09 23:22:28.720504] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:48.283 [2024-12-09 23:22:28.728447] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:40:48.283 [2024-12-09 23:22:28.728494] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:40:48.283 [2024-12-09 23:22:28.728505] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:40:48.283 [2024-12-09 23:22:28.728518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:40:48.283 [2024-12-09 23:22:28.728525] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:40:48.283 [2024-12-09 23:22:28.728537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:40:48.283 [2024-12-09 23:22:28.728545] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:40:48.283 [2024-12-09 23:22:28.728556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:48.283 [2024-12-09 23:22:28.775137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:40:48.283 BaseBdev1 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:48.283 [ 00:40:48.283 { 00:40:48.283 "name": "BaseBdev1", 00:40:48.283 "aliases": [ 00:40:48.283 "0354936d-03eb-4385-ac82-003462582430" 00:40:48.283 ], 00:40:48.283 "product_name": "Malloc disk", 00:40:48.283 "block_size": 512, 00:40:48.283 "num_blocks": 65536, 00:40:48.283 "uuid": "0354936d-03eb-4385-ac82-003462582430", 00:40:48.283 "assigned_rate_limits": { 00:40:48.283 "rw_ios_per_sec": 0, 00:40:48.283 "rw_mbytes_per_sec": 0, 00:40:48.283 "r_mbytes_per_sec": 0, 00:40:48.283 "w_mbytes_per_sec": 0 00:40:48.283 }, 00:40:48.283 "claimed": true, 00:40:48.283 "claim_type": "exclusive_write", 00:40:48.283 "zoned": false, 00:40:48.283 "supported_io_types": { 00:40:48.283 "read": true, 00:40:48.283 "write": true, 00:40:48.283 "unmap": true, 00:40:48.283 "flush": true, 00:40:48.283 "reset": true, 00:40:48.283 "nvme_admin": false, 00:40:48.283 "nvme_io": false, 00:40:48.283 "nvme_io_md": false, 00:40:48.283 "write_zeroes": true, 00:40:48.283 "zcopy": true, 00:40:48.283 "get_zone_info": false, 00:40:48.283 "zone_management": false, 00:40:48.283 "zone_append": false, 00:40:48.283 "compare": false, 00:40:48.283 "compare_and_write": false, 00:40:48.283 "abort": true, 00:40:48.283 "seek_hole": false, 00:40:48.283 "seek_data": false, 00:40:48.283 "copy": true, 00:40:48.283 "nvme_iov_md": false 00:40:48.283 }, 00:40:48.283 "memory_domains": [ 00:40:48.283 { 00:40:48.283 "dma_device_id": "system", 00:40:48.283 "dma_device_type": 1 00:40:48.283 }, 00:40:48.283 { 00:40:48.283 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:40:48.283 "dma_device_type": 2 00:40:48.283 } 00:40:48.283 ], 00:40:48.283 "driver_specific": {} 00:40:48.283 } 00:40:48.283 ] 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:48.283 23:22:28 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:48.283 "name": "Existed_Raid", 00:40:48.283 "uuid": "b084231d-a13e-4ddf-b365-4235edbf4bd4", 00:40:48.283 "strip_size_kb": 64, 00:40:48.283 "state": "configuring", 00:40:48.283 "raid_level": "raid5f", 00:40:48.283 "superblock": true, 00:40:48.283 "num_base_bdevs": 4, 00:40:48.283 "num_base_bdevs_discovered": 1, 00:40:48.283 "num_base_bdevs_operational": 4, 00:40:48.283 "base_bdevs_list": [ 00:40:48.283 { 00:40:48.283 "name": "BaseBdev1", 00:40:48.283 "uuid": "0354936d-03eb-4385-ac82-003462582430", 00:40:48.283 "is_configured": true, 00:40:48.283 "data_offset": 2048, 00:40:48.283 "data_size": 63488 00:40:48.283 }, 00:40:48.283 { 00:40:48.283 "name": "BaseBdev2", 00:40:48.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:48.283 "is_configured": false, 00:40:48.283 "data_offset": 0, 00:40:48.283 "data_size": 0 00:40:48.283 }, 00:40:48.283 { 00:40:48.283 "name": "BaseBdev3", 00:40:48.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:48.283 "is_configured": false, 00:40:48.283 "data_offset": 0, 00:40:48.283 "data_size": 0 00:40:48.283 }, 00:40:48.283 { 00:40:48.283 "name": "BaseBdev4", 00:40:48.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:48.283 "is_configured": false, 00:40:48.283 "data_offset": 0, 00:40:48.283 "data_size": 0 00:40:48.283 } 00:40:48.283 ] 00:40:48.283 }' 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:48.283 23:22:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:48.851 23:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:40:48.851 23:22:29 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.851 23:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:48.851 [2024-12-09 23:22:29.262559] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:40:48.851 [2024-12-09 23:22:29.262620] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:40:48.851 23:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.852 23:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:40:48.852 23:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.852 23:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:48.852 [2024-12-09 23:22:29.270627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:40:48.852 [2024-12-09 23:22:29.272688] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:40:48.852 [2024-12-09 23:22:29.272735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:40:48.852 [2024-12-09 23:22:29.272746] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:40:48.852 [2024-12-09 23:22:29.272761] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:40:48.852 [2024-12-09 23:22:29.272769] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:40:48.852 [2024-12-09 23:22:29.272780] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:40:48.852 23:22:29 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.852 23:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:40:48.852 23:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:40:48.852 23:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:40:48.852 23:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:48.852 23:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:48.852 23:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:48.852 23:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:48.852 23:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:40:48.852 23:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:48.852 23:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:48.852 23:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:48.852 23:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:48.852 23:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:48.852 23:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:48.852 23:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.852 23:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:48.852 23:22:29 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.852 23:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:48.852 "name": "Existed_Raid", 00:40:48.852 "uuid": "c41ae815-1aa2-4503-a08d-24c4af638dbc", 00:40:48.852 "strip_size_kb": 64, 00:40:48.852 "state": "configuring", 00:40:48.852 "raid_level": "raid5f", 00:40:48.852 "superblock": true, 00:40:48.852 "num_base_bdevs": 4, 00:40:48.852 "num_base_bdevs_discovered": 1, 00:40:48.852 "num_base_bdevs_operational": 4, 00:40:48.852 "base_bdevs_list": [ 00:40:48.852 { 00:40:48.852 "name": "BaseBdev1", 00:40:48.852 "uuid": "0354936d-03eb-4385-ac82-003462582430", 00:40:48.852 "is_configured": true, 00:40:48.852 "data_offset": 2048, 00:40:48.852 "data_size": 63488 00:40:48.852 }, 00:40:48.852 { 00:40:48.852 "name": "BaseBdev2", 00:40:48.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:48.852 "is_configured": false, 00:40:48.852 "data_offset": 0, 00:40:48.852 "data_size": 0 00:40:48.852 }, 00:40:48.852 { 00:40:48.852 "name": "BaseBdev3", 00:40:48.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:48.852 "is_configured": false, 00:40:48.852 "data_offset": 0, 00:40:48.852 "data_size": 0 00:40:48.852 }, 00:40:48.852 { 00:40:48.852 "name": "BaseBdev4", 00:40:48.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:48.852 "is_configured": false, 00:40:48.852 "data_offset": 0, 00:40:48.852 "data_size": 0 00:40:48.852 } 00:40:48.852 ] 00:40:48.852 }' 00:40:48.852 23:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:48.852 23:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:49.110 23:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:40:49.110 23:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:40:49.110 23:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:49.110 [2024-12-09 23:22:29.725808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:40:49.110 BaseBdev2 00:40:49.110 23:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:49.110 23:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:40:49.110 23:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:40:49.110 23:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:49.110 23:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:40:49.110 23:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:49.110 23:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:49.110 23:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:40:49.110 23:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:49.110 23:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:49.110 23:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:49.110 23:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:40:49.110 23:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:49.111 23:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:49.369 [ 00:40:49.369 { 00:40:49.369 "name": "BaseBdev2", 00:40:49.369 "aliases": [ 00:40:49.369 
"df5734a8-f42c-4358-8c75-db71ac1c12f4" 00:40:49.369 ], 00:40:49.369 "product_name": "Malloc disk", 00:40:49.369 "block_size": 512, 00:40:49.369 "num_blocks": 65536, 00:40:49.369 "uuid": "df5734a8-f42c-4358-8c75-db71ac1c12f4", 00:40:49.369 "assigned_rate_limits": { 00:40:49.369 "rw_ios_per_sec": 0, 00:40:49.369 "rw_mbytes_per_sec": 0, 00:40:49.369 "r_mbytes_per_sec": 0, 00:40:49.369 "w_mbytes_per_sec": 0 00:40:49.369 }, 00:40:49.369 "claimed": true, 00:40:49.369 "claim_type": "exclusive_write", 00:40:49.369 "zoned": false, 00:40:49.369 "supported_io_types": { 00:40:49.369 "read": true, 00:40:49.369 "write": true, 00:40:49.369 "unmap": true, 00:40:49.369 "flush": true, 00:40:49.369 "reset": true, 00:40:49.369 "nvme_admin": false, 00:40:49.369 "nvme_io": false, 00:40:49.369 "nvme_io_md": false, 00:40:49.369 "write_zeroes": true, 00:40:49.369 "zcopy": true, 00:40:49.369 "get_zone_info": false, 00:40:49.369 "zone_management": false, 00:40:49.369 "zone_append": false, 00:40:49.369 "compare": false, 00:40:49.369 "compare_and_write": false, 00:40:49.369 "abort": true, 00:40:49.369 "seek_hole": false, 00:40:49.369 "seek_data": false, 00:40:49.369 "copy": true, 00:40:49.369 "nvme_iov_md": false 00:40:49.369 }, 00:40:49.369 "memory_domains": [ 00:40:49.369 { 00:40:49.369 "dma_device_id": "system", 00:40:49.369 "dma_device_type": 1 00:40:49.369 }, 00:40:49.369 { 00:40:49.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:49.369 "dma_device_type": 2 00:40:49.369 } 00:40:49.369 ], 00:40:49.369 "driver_specific": {} 00:40:49.369 } 00:40:49.369 ] 00:40:49.369 23:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:49.369 23:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:40:49.369 23:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:40:49.369 23:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:40:49.369 23:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:40:49.369 23:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:49.369 23:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:49.369 23:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:49.369 23:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:49.369 23:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:40:49.369 23:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:49.369 23:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:49.369 23:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:49.369 23:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:49.369 23:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:49.369 23:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:49.369 23:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:49.369 23:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:49.369 23:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:49.369 23:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:49.369 "name": "Existed_Raid", 00:40:49.369 "uuid": 
"c41ae815-1aa2-4503-a08d-24c4af638dbc", 00:40:49.369 "strip_size_kb": 64, 00:40:49.369 "state": "configuring", 00:40:49.369 "raid_level": "raid5f", 00:40:49.369 "superblock": true, 00:40:49.369 "num_base_bdevs": 4, 00:40:49.369 "num_base_bdevs_discovered": 2, 00:40:49.369 "num_base_bdevs_operational": 4, 00:40:49.369 "base_bdevs_list": [ 00:40:49.369 { 00:40:49.369 "name": "BaseBdev1", 00:40:49.369 "uuid": "0354936d-03eb-4385-ac82-003462582430", 00:40:49.369 "is_configured": true, 00:40:49.369 "data_offset": 2048, 00:40:49.369 "data_size": 63488 00:40:49.369 }, 00:40:49.369 { 00:40:49.369 "name": "BaseBdev2", 00:40:49.369 "uuid": "df5734a8-f42c-4358-8c75-db71ac1c12f4", 00:40:49.369 "is_configured": true, 00:40:49.369 "data_offset": 2048, 00:40:49.369 "data_size": 63488 00:40:49.369 }, 00:40:49.369 { 00:40:49.369 "name": "BaseBdev3", 00:40:49.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:49.369 "is_configured": false, 00:40:49.369 "data_offset": 0, 00:40:49.369 "data_size": 0 00:40:49.369 }, 00:40:49.369 { 00:40:49.369 "name": "BaseBdev4", 00:40:49.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:49.369 "is_configured": false, 00:40:49.370 "data_offset": 0, 00:40:49.370 "data_size": 0 00:40:49.370 } 00:40:49.370 ] 00:40:49.370 }' 00:40:49.370 23:22:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:49.370 23:22:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:49.628 23:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:40:49.628 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:49.628 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:49.628 [2024-12-09 23:22:30.232031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:40:49.628 BaseBdev3 
00:40:49.628 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:49.628 23:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:40:49.628 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:40:49.628 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:49.628 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:40:49.628 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:49.628 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:49.628 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:40:49.628 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:49.628 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:49.628 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:49.628 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:40:49.628 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:49.628 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:49.628 [ 00:40:49.628 { 00:40:49.628 "name": "BaseBdev3", 00:40:49.628 "aliases": [ 00:40:49.628 "465a61e2-0ec0-4113-a04f-9e695ebe2561" 00:40:49.628 ], 00:40:49.628 "product_name": "Malloc disk", 00:40:49.628 "block_size": 512, 00:40:49.628 "num_blocks": 65536, 00:40:49.628 "uuid": "465a61e2-0ec0-4113-a04f-9e695ebe2561", 00:40:49.628 
"assigned_rate_limits": { 00:40:49.628 "rw_ios_per_sec": 0, 00:40:49.628 "rw_mbytes_per_sec": 0, 00:40:49.628 "r_mbytes_per_sec": 0, 00:40:49.628 "w_mbytes_per_sec": 0 00:40:49.887 }, 00:40:49.887 "claimed": true, 00:40:49.887 "claim_type": "exclusive_write", 00:40:49.887 "zoned": false, 00:40:49.887 "supported_io_types": { 00:40:49.887 "read": true, 00:40:49.887 "write": true, 00:40:49.887 "unmap": true, 00:40:49.887 "flush": true, 00:40:49.887 "reset": true, 00:40:49.887 "nvme_admin": false, 00:40:49.887 "nvme_io": false, 00:40:49.887 "nvme_io_md": false, 00:40:49.887 "write_zeroes": true, 00:40:49.887 "zcopy": true, 00:40:49.887 "get_zone_info": false, 00:40:49.887 "zone_management": false, 00:40:49.887 "zone_append": false, 00:40:49.887 "compare": false, 00:40:49.887 "compare_and_write": false, 00:40:49.887 "abort": true, 00:40:49.887 "seek_hole": false, 00:40:49.887 "seek_data": false, 00:40:49.887 "copy": true, 00:40:49.887 "nvme_iov_md": false 00:40:49.887 }, 00:40:49.887 "memory_domains": [ 00:40:49.887 { 00:40:49.887 "dma_device_id": "system", 00:40:49.887 "dma_device_type": 1 00:40:49.887 }, 00:40:49.887 { 00:40:49.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:49.887 "dma_device_type": 2 00:40:49.887 } 00:40:49.887 ], 00:40:49.887 "driver_specific": {} 00:40:49.887 } 00:40:49.887 ] 00:40:49.887 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:49.887 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:40:49.887 23:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:40:49.887 23:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:40:49.887 23:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:40:49.887 23:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:40:49.887 23:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:49.887 23:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:49.887 23:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:49.887 23:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:40:49.887 23:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:49.887 23:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:49.887 23:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:49.887 23:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:49.887 23:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:49.887 23:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:49.887 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:49.887 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:49.887 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:49.887 23:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:49.887 "name": "Existed_Raid", 00:40:49.887 "uuid": "c41ae815-1aa2-4503-a08d-24c4af638dbc", 00:40:49.887 "strip_size_kb": 64, 00:40:49.887 "state": "configuring", 00:40:49.887 "raid_level": "raid5f", 00:40:49.887 "superblock": true, 00:40:49.887 "num_base_bdevs": 4, 00:40:49.887 "num_base_bdevs_discovered": 3, 
00:40:49.887 "num_base_bdevs_operational": 4, 00:40:49.887 "base_bdevs_list": [ 00:40:49.887 { 00:40:49.887 "name": "BaseBdev1", 00:40:49.887 "uuid": "0354936d-03eb-4385-ac82-003462582430", 00:40:49.887 "is_configured": true, 00:40:49.887 "data_offset": 2048, 00:40:49.887 "data_size": 63488 00:40:49.887 }, 00:40:49.887 { 00:40:49.887 "name": "BaseBdev2", 00:40:49.887 "uuid": "df5734a8-f42c-4358-8c75-db71ac1c12f4", 00:40:49.887 "is_configured": true, 00:40:49.887 "data_offset": 2048, 00:40:49.887 "data_size": 63488 00:40:49.887 }, 00:40:49.887 { 00:40:49.887 "name": "BaseBdev3", 00:40:49.887 "uuid": "465a61e2-0ec0-4113-a04f-9e695ebe2561", 00:40:49.887 "is_configured": true, 00:40:49.887 "data_offset": 2048, 00:40:49.887 "data_size": 63488 00:40:49.887 }, 00:40:49.887 { 00:40:49.887 "name": "BaseBdev4", 00:40:49.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:49.887 "is_configured": false, 00:40:49.887 "data_offset": 0, 00:40:49.887 "data_size": 0 00:40:49.887 } 00:40:49.887 ] 00:40:49.887 }' 00:40:49.887 23:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:49.887 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:50.146 23:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:40:50.146 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:50.146 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:50.146 [2024-12-09 23:22:30.735516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:40:50.146 [2024-12-09 23:22:30.735915] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:40:50.146 [2024-12-09 23:22:30.735936] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:40:50.146 BaseBdev4 
00:40:50.146 [2024-12-09 23:22:30.736310] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:40:50.146 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:50.146 23:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:40:50.146 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:40:50.146 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:50.146 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:40:50.146 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:50.146 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:50.146 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:40:50.146 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:50.146 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:50.146 [2024-12-09 23:22:30.744545] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:40:50.146 [2024-12-09 23:22:30.744754] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:40:50.146 [2024-12-09 23:22:30.745217] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:50.146 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:50.146 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:40:50.146 23:22:30 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:50.146 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:50.146 [ 00:40:50.146 { 00:40:50.146 "name": "BaseBdev4", 00:40:50.146 "aliases": [ 00:40:50.146 "00c5eb04-7c07-435c-813b-5425d7aa9d12" 00:40:50.146 ], 00:40:50.146 "product_name": "Malloc disk", 00:40:50.146 "block_size": 512, 00:40:50.146 "num_blocks": 65536, 00:40:50.146 "uuid": "00c5eb04-7c07-435c-813b-5425d7aa9d12", 00:40:50.146 "assigned_rate_limits": { 00:40:50.146 "rw_ios_per_sec": 0, 00:40:50.146 "rw_mbytes_per_sec": 0, 00:40:50.146 "r_mbytes_per_sec": 0, 00:40:50.146 "w_mbytes_per_sec": 0 00:40:50.146 }, 00:40:50.146 "claimed": true, 00:40:50.146 "claim_type": "exclusive_write", 00:40:50.146 "zoned": false, 00:40:50.146 "supported_io_types": { 00:40:50.146 "read": true, 00:40:50.146 "write": true, 00:40:50.146 "unmap": true, 00:40:50.146 "flush": true, 00:40:50.146 "reset": true, 00:40:50.146 "nvme_admin": false, 00:40:50.146 "nvme_io": false, 00:40:50.405 "nvme_io_md": false, 00:40:50.405 "write_zeroes": true, 00:40:50.405 "zcopy": true, 00:40:50.405 "get_zone_info": false, 00:40:50.405 "zone_management": false, 00:40:50.405 "zone_append": false, 00:40:50.405 "compare": false, 00:40:50.405 "compare_and_write": false, 00:40:50.405 "abort": true, 00:40:50.405 "seek_hole": false, 00:40:50.405 "seek_data": false, 00:40:50.405 "copy": true, 00:40:50.405 "nvme_iov_md": false 00:40:50.405 }, 00:40:50.405 "memory_domains": [ 00:40:50.405 { 00:40:50.405 "dma_device_id": "system", 00:40:50.405 "dma_device_type": 1 00:40:50.405 }, 00:40:50.405 { 00:40:50.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:50.405 "dma_device_type": 2 00:40:50.405 } 00:40:50.405 ], 00:40:50.405 "driver_specific": {} 00:40:50.405 } 00:40:50.405 ] 00:40:50.405 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:50.405 23:22:30 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:40:50.405 23:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:40:50.405 23:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:40:50.405 23:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:40:50.405 23:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:50.405 23:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:50.405 23:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:50.405 23:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:50.405 23:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:40:50.405 23:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:50.405 23:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:50.405 23:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:50.405 23:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:50.405 23:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:50.405 23:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:50.405 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:50.405 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:40:50.405 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:50.405 23:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:50.405 "name": "Existed_Raid", 00:40:50.405 "uuid": "c41ae815-1aa2-4503-a08d-24c4af638dbc", 00:40:50.405 "strip_size_kb": 64, 00:40:50.405 "state": "online", 00:40:50.405 "raid_level": "raid5f", 00:40:50.405 "superblock": true, 00:40:50.405 "num_base_bdevs": 4, 00:40:50.405 "num_base_bdevs_discovered": 4, 00:40:50.405 "num_base_bdevs_operational": 4, 00:40:50.405 "base_bdevs_list": [ 00:40:50.405 { 00:40:50.405 "name": "BaseBdev1", 00:40:50.405 "uuid": "0354936d-03eb-4385-ac82-003462582430", 00:40:50.405 "is_configured": true, 00:40:50.405 "data_offset": 2048, 00:40:50.405 "data_size": 63488 00:40:50.405 }, 00:40:50.405 { 00:40:50.405 "name": "BaseBdev2", 00:40:50.405 "uuid": "df5734a8-f42c-4358-8c75-db71ac1c12f4", 00:40:50.405 "is_configured": true, 00:40:50.405 "data_offset": 2048, 00:40:50.405 "data_size": 63488 00:40:50.405 }, 00:40:50.405 { 00:40:50.405 "name": "BaseBdev3", 00:40:50.405 "uuid": "465a61e2-0ec0-4113-a04f-9e695ebe2561", 00:40:50.405 "is_configured": true, 00:40:50.405 "data_offset": 2048, 00:40:50.405 "data_size": 63488 00:40:50.405 }, 00:40:50.405 { 00:40:50.405 "name": "BaseBdev4", 00:40:50.405 "uuid": "00c5eb04-7c07-435c-813b-5425d7aa9d12", 00:40:50.405 "is_configured": true, 00:40:50.405 "data_offset": 2048, 00:40:50.405 "data_size": 63488 00:40:50.405 } 00:40:50.405 ] 00:40:50.405 }' 00:40:50.405 23:22:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:50.405 23:22:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:50.664 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:40:50.664 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:40:50.664 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:40:50.664 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:40:50.664 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:40:50.664 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:40:50.664 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:40:50.664 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:40:50.664 23:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:50.664 23:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:50.664 [2024-12-09 23:22:31.262916] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:50.664 23:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:50.924 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:40:50.924 "name": "Existed_Raid", 00:40:50.924 "aliases": [ 00:40:50.924 "c41ae815-1aa2-4503-a08d-24c4af638dbc" 00:40:50.924 ], 00:40:50.924 "product_name": "Raid Volume", 00:40:50.924 "block_size": 512, 00:40:50.924 "num_blocks": 190464, 00:40:50.924 "uuid": "c41ae815-1aa2-4503-a08d-24c4af638dbc", 00:40:50.924 "assigned_rate_limits": { 00:40:50.924 "rw_ios_per_sec": 0, 00:40:50.924 "rw_mbytes_per_sec": 0, 00:40:50.924 "r_mbytes_per_sec": 0, 00:40:50.924 "w_mbytes_per_sec": 0 00:40:50.924 }, 00:40:50.924 "claimed": false, 00:40:50.924 "zoned": false, 00:40:50.924 "supported_io_types": { 00:40:50.924 "read": true, 00:40:50.924 "write": true, 00:40:50.924 "unmap": false, 00:40:50.924 "flush": false, 
00:40:50.924 "reset": true, 00:40:50.924 "nvme_admin": false, 00:40:50.924 "nvme_io": false, 00:40:50.924 "nvme_io_md": false, 00:40:50.924 "write_zeroes": true, 00:40:50.924 "zcopy": false, 00:40:50.924 "get_zone_info": false, 00:40:50.924 "zone_management": false, 00:40:50.924 "zone_append": false, 00:40:50.924 "compare": false, 00:40:50.924 "compare_and_write": false, 00:40:50.924 "abort": false, 00:40:50.924 "seek_hole": false, 00:40:50.924 "seek_data": false, 00:40:50.924 "copy": false, 00:40:50.924 "nvme_iov_md": false 00:40:50.924 }, 00:40:50.924 "driver_specific": { 00:40:50.924 "raid": { 00:40:50.924 "uuid": "c41ae815-1aa2-4503-a08d-24c4af638dbc", 00:40:50.924 "strip_size_kb": 64, 00:40:50.924 "state": "online", 00:40:50.924 "raid_level": "raid5f", 00:40:50.924 "superblock": true, 00:40:50.924 "num_base_bdevs": 4, 00:40:50.924 "num_base_bdevs_discovered": 4, 00:40:50.924 "num_base_bdevs_operational": 4, 00:40:50.924 "base_bdevs_list": [ 00:40:50.924 { 00:40:50.924 "name": "BaseBdev1", 00:40:50.924 "uuid": "0354936d-03eb-4385-ac82-003462582430", 00:40:50.924 "is_configured": true, 00:40:50.924 "data_offset": 2048, 00:40:50.924 "data_size": 63488 00:40:50.924 }, 00:40:50.924 { 00:40:50.924 "name": "BaseBdev2", 00:40:50.924 "uuid": "df5734a8-f42c-4358-8c75-db71ac1c12f4", 00:40:50.924 "is_configured": true, 00:40:50.924 "data_offset": 2048, 00:40:50.924 "data_size": 63488 00:40:50.924 }, 00:40:50.924 { 00:40:50.924 "name": "BaseBdev3", 00:40:50.924 "uuid": "465a61e2-0ec0-4113-a04f-9e695ebe2561", 00:40:50.924 "is_configured": true, 00:40:50.924 "data_offset": 2048, 00:40:50.924 "data_size": 63488 00:40:50.924 }, 00:40:50.924 { 00:40:50.924 "name": "BaseBdev4", 00:40:50.924 "uuid": "00c5eb04-7c07-435c-813b-5425d7aa9d12", 00:40:50.924 "is_configured": true, 00:40:50.924 "data_offset": 2048, 00:40:50.924 "data_size": 63488 00:40:50.924 } 00:40:50.924 ] 00:40:50.924 } 00:40:50.924 } 00:40:50.924 }' 00:40:50.924 23:22:31 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:40:50.924 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:40:50.924 BaseBdev2 00:40:50.924 BaseBdev3 00:40:50.924 BaseBdev4' 00:40:50.924 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:50.924 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:40:50.924 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:40:50.924 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:40:50.924 23:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:50.924 23:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:50.924 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:50.924 23:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:50.924 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:40:50.924 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:40:50.924 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:40:50.924 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:50.924 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:40:50.924 23:22:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:50.924 23:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:50.924 23:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:50.924 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:40:50.924 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:40:50.924 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:40:50.924 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:50.924 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:40:50.924 23:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:50.924 23:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:50.924 23:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:50.924 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:40:50.924 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:40:50.924 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:40:50.924 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:51.184 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:40:51.184 23:22:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:51.184 23:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:51.184 23:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:51.184 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:40:51.184 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:40:51.184 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:40:51.184 23:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:51.184 23:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:51.184 [2024-12-09 23:22:31.602674] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:40:51.184 23:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:51.184 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:40:51.184 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:40:51.184 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:40:51.184 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:40:51.184 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:40:51.184 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:40:51.184 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:51.184 23:22:31 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:51.184 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:51.184 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:51.184 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:40:51.184 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:51.184 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:51.184 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:51.184 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:51.184 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:51.184 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:51.184 23:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:51.184 23:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:51.185 23:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:51.185 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:51.185 "name": "Existed_Raid", 00:40:51.185 "uuid": "c41ae815-1aa2-4503-a08d-24c4af638dbc", 00:40:51.185 "strip_size_kb": 64, 00:40:51.185 "state": "online", 00:40:51.185 "raid_level": "raid5f", 00:40:51.185 "superblock": true, 00:40:51.185 "num_base_bdevs": 4, 00:40:51.185 "num_base_bdevs_discovered": 3, 00:40:51.185 "num_base_bdevs_operational": 3, 00:40:51.185 "base_bdevs_list": [ 00:40:51.185 { 00:40:51.185 "name": 
null, 00:40:51.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:51.185 "is_configured": false, 00:40:51.185 "data_offset": 0, 00:40:51.185 "data_size": 63488 00:40:51.185 }, 00:40:51.185 { 00:40:51.185 "name": "BaseBdev2", 00:40:51.185 "uuid": "df5734a8-f42c-4358-8c75-db71ac1c12f4", 00:40:51.185 "is_configured": true, 00:40:51.185 "data_offset": 2048, 00:40:51.185 "data_size": 63488 00:40:51.185 }, 00:40:51.185 { 00:40:51.185 "name": "BaseBdev3", 00:40:51.185 "uuid": "465a61e2-0ec0-4113-a04f-9e695ebe2561", 00:40:51.185 "is_configured": true, 00:40:51.185 "data_offset": 2048, 00:40:51.185 "data_size": 63488 00:40:51.185 }, 00:40:51.185 { 00:40:51.185 "name": "BaseBdev4", 00:40:51.185 "uuid": "00c5eb04-7c07-435c-813b-5425d7aa9d12", 00:40:51.185 "is_configured": true, 00:40:51.185 "data_offset": 2048, 00:40:51.185 "data_size": 63488 00:40:51.185 } 00:40:51.185 ] 00:40:51.185 }' 00:40:51.185 23:22:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:51.185 23:22:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:51.752 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:40:51.752 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:40:51.752 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:51.752 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:51.752 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:40:51.752 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:51.752 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:51.752 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:40:51.752 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:40:51.752 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:40:51.752 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:51.752 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:51.752 [2024-12-09 23:22:32.191480] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:40:51.752 [2024-12-09 23:22:32.191703] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:40:51.752 [2024-12-09 23:22:32.305315] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:51.752 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:51.752 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:40:51.752 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:40:51.752 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:51.752 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:51.752 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:51.752 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:40:51.752 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:51.752 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:40:51.752 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:40:51.752 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:40:51.752 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:51.752 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:51.752 [2024-12-09 23:22:32.353250] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:40:52.011 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:52.012 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:40:52.012 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:40:52.012 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:52.012 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:40:52.012 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:52.012 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:52.012 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:52.012 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:40:52.012 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:40:52.012 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:40:52.012 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:52.012 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:52.012 [2024-12-09 
23:22:32.526697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:40:52.012 [2024-12-09 23:22:32.526769] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:40:52.012 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:52.012 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:40:52.012 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:40:52.270 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:52.270 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:40:52.270 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:52.270 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:52.270 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:52.270 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:40:52.270 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:40:52.270 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:40:52.270 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:40:52.270 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:40:52.270 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:40:52.270 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:52.270 23:22:32 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:52.270 BaseBdev2 00:40:52.270 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:52.270 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:40:52.270 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:40:52.270 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:52.270 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:40:52.270 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:52.270 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:52.270 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:40:52.270 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:52.270 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:52.270 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:52.270 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:40:52.270 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:52.270 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:52.270 [ 00:40:52.270 { 00:40:52.270 "name": "BaseBdev2", 00:40:52.270 "aliases": [ 00:40:52.270 "bd7f6799-b1c7-45bb-a45e-bc56c179beeb" 00:40:52.270 ], 00:40:52.270 "product_name": "Malloc disk", 00:40:52.270 "block_size": 512, 00:40:52.270 
"num_blocks": 65536, 00:40:52.270 "uuid": "bd7f6799-b1c7-45bb-a45e-bc56c179beeb", 00:40:52.270 "assigned_rate_limits": { 00:40:52.270 "rw_ios_per_sec": 0, 00:40:52.270 "rw_mbytes_per_sec": 0, 00:40:52.270 "r_mbytes_per_sec": 0, 00:40:52.270 "w_mbytes_per_sec": 0 00:40:52.270 }, 00:40:52.270 "claimed": false, 00:40:52.270 "zoned": false, 00:40:52.270 "supported_io_types": { 00:40:52.270 "read": true, 00:40:52.270 "write": true, 00:40:52.270 "unmap": true, 00:40:52.270 "flush": true, 00:40:52.270 "reset": true, 00:40:52.270 "nvme_admin": false, 00:40:52.270 "nvme_io": false, 00:40:52.270 "nvme_io_md": false, 00:40:52.270 "write_zeroes": true, 00:40:52.270 "zcopy": true, 00:40:52.270 "get_zone_info": false, 00:40:52.270 "zone_management": false, 00:40:52.270 "zone_append": false, 00:40:52.270 "compare": false, 00:40:52.270 "compare_and_write": false, 00:40:52.270 "abort": true, 00:40:52.270 "seek_hole": false, 00:40:52.270 "seek_data": false, 00:40:52.270 "copy": true, 00:40:52.270 "nvme_iov_md": false 00:40:52.270 }, 00:40:52.270 "memory_domains": [ 00:40:52.270 { 00:40:52.270 "dma_device_id": "system", 00:40:52.270 "dma_device_type": 1 00:40:52.270 }, 00:40:52.270 { 00:40:52.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:52.270 "dma_device_type": 2 00:40:52.270 } 00:40:52.270 ], 00:40:52.271 "driver_specific": {} 00:40:52.271 } 00:40:52.271 ] 00:40:52.271 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:52.271 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:40:52.271 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:40:52.271 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:40:52.271 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:40:52.271 23:22:32 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:52.271 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:52.271 BaseBdev3 00:40:52.271 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:52.271 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:40:52.271 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:40:52.271 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:52.271 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:40:52.271 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:52.271 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:52.271 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:40:52.271 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:52.271 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:52.271 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:52.271 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:40:52.271 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:52.271 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:52.271 [ 00:40:52.271 { 00:40:52.271 "name": "BaseBdev3", 00:40:52.271 "aliases": [ 00:40:52.271 
"9e2c29e7-b10f-480a-9b3f-5c664d0429ee" 00:40:52.271 ], 00:40:52.271 "product_name": "Malloc disk", 00:40:52.271 "block_size": 512, 00:40:52.271 "num_blocks": 65536, 00:40:52.271 "uuid": "9e2c29e7-b10f-480a-9b3f-5c664d0429ee", 00:40:52.271 "assigned_rate_limits": { 00:40:52.271 "rw_ios_per_sec": 0, 00:40:52.271 "rw_mbytes_per_sec": 0, 00:40:52.271 "r_mbytes_per_sec": 0, 00:40:52.271 "w_mbytes_per_sec": 0 00:40:52.271 }, 00:40:52.271 "claimed": false, 00:40:52.271 "zoned": false, 00:40:52.271 "supported_io_types": { 00:40:52.271 "read": true, 00:40:52.271 "write": true, 00:40:52.271 "unmap": true, 00:40:52.271 "flush": true, 00:40:52.271 "reset": true, 00:40:52.271 "nvme_admin": false, 00:40:52.271 "nvme_io": false, 00:40:52.271 "nvme_io_md": false, 00:40:52.271 "write_zeroes": true, 00:40:52.271 "zcopy": true, 00:40:52.271 "get_zone_info": false, 00:40:52.271 "zone_management": false, 00:40:52.271 "zone_append": false, 00:40:52.271 "compare": false, 00:40:52.271 "compare_and_write": false, 00:40:52.271 "abort": true, 00:40:52.271 "seek_hole": false, 00:40:52.271 "seek_data": false, 00:40:52.271 "copy": true, 00:40:52.271 "nvme_iov_md": false 00:40:52.271 }, 00:40:52.271 "memory_domains": [ 00:40:52.271 { 00:40:52.271 "dma_device_id": "system", 00:40:52.271 "dma_device_type": 1 00:40:52.271 }, 00:40:52.271 { 00:40:52.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:52.271 "dma_device_type": 2 00:40:52.271 } 00:40:52.271 ], 00:40:52.271 "driver_specific": {} 00:40:52.271 } 00:40:52.271 ] 00:40:52.271 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:52.271 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:40:52.271 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:40:52.271 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:40:52.271 23:22:32 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:40:52.271 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:52.271 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:52.529 BaseBdev4 00:40:52.529 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:52.529 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:40:52.529 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:40:52.529 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:52.530 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:40:52.530 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:52.530 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:52.530 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:40:52.530 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:52.530 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:52.530 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:52.530 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:40:52.530 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:52.530 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:40:52.530 [ 00:40:52.530 { 00:40:52.530 "name": "BaseBdev4", 00:40:52.530 "aliases": [ 00:40:52.530 "2cfd15ba-2679-4066-ae79-356f9b3551ef" 00:40:52.530 ], 00:40:52.530 "product_name": "Malloc disk", 00:40:52.530 "block_size": 512, 00:40:52.530 "num_blocks": 65536, 00:40:52.530 "uuid": "2cfd15ba-2679-4066-ae79-356f9b3551ef", 00:40:52.530 "assigned_rate_limits": { 00:40:52.530 "rw_ios_per_sec": 0, 00:40:52.530 "rw_mbytes_per_sec": 0, 00:40:52.530 "r_mbytes_per_sec": 0, 00:40:52.530 "w_mbytes_per_sec": 0 00:40:52.530 }, 00:40:52.530 "claimed": false, 00:40:52.530 "zoned": false, 00:40:52.530 "supported_io_types": { 00:40:52.530 "read": true, 00:40:52.530 "write": true, 00:40:52.530 "unmap": true, 00:40:52.530 "flush": true, 00:40:52.530 "reset": true, 00:40:52.530 "nvme_admin": false, 00:40:52.530 "nvme_io": false, 00:40:52.530 "nvme_io_md": false, 00:40:52.530 "write_zeroes": true, 00:40:52.530 "zcopy": true, 00:40:52.530 "get_zone_info": false, 00:40:52.530 "zone_management": false, 00:40:52.530 "zone_append": false, 00:40:52.530 "compare": false, 00:40:52.530 "compare_and_write": false, 00:40:52.530 "abort": true, 00:40:52.530 "seek_hole": false, 00:40:52.530 "seek_data": false, 00:40:52.530 "copy": true, 00:40:52.530 "nvme_iov_md": false 00:40:52.530 }, 00:40:52.530 "memory_domains": [ 00:40:52.530 { 00:40:52.530 "dma_device_id": "system", 00:40:52.530 "dma_device_type": 1 00:40:52.530 }, 00:40:52.530 { 00:40:52.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:52.530 "dma_device_type": 2 00:40:52.530 } 00:40:52.530 ], 00:40:52.530 "driver_specific": {} 00:40:52.530 } 00:40:52.530 ] 00:40:52.530 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:52.530 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:40:52.530 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:40:52.530 23:22:32 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:40:52.530 23:22:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:40:52.530 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:52.530 23:22:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:52.530 [2024-12-09 23:22:32.998901] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:40:52.530 [2024-12-09 23:22:32.999101] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:40:52.530 [2024-12-09 23:22:32.999223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:40:52.530 [2024-12-09 23:22:33.002247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:40:52.530 [2024-12-09 23:22:33.002471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:40:52.530 23:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:52.530 23:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:40:52.530 23:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:52.530 23:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:52.530 23:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:52.530 23:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:52.530 23:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:40:52.530 23:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:52.530 23:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:52.530 23:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:52.530 23:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:52.530 23:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:52.530 23:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:52.530 23:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:52.530 23:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:52.530 23:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:52.530 23:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:52.530 "name": "Existed_Raid", 00:40:52.530 "uuid": "f08aa057-6878-49c6-9c08-85028b161dbf", 00:40:52.530 "strip_size_kb": 64, 00:40:52.530 "state": "configuring", 00:40:52.530 "raid_level": "raid5f", 00:40:52.530 "superblock": true, 00:40:52.530 "num_base_bdevs": 4, 00:40:52.530 "num_base_bdevs_discovered": 3, 00:40:52.530 "num_base_bdevs_operational": 4, 00:40:52.530 "base_bdevs_list": [ 00:40:52.530 { 00:40:52.530 "name": "BaseBdev1", 00:40:52.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:52.530 "is_configured": false, 00:40:52.530 "data_offset": 0, 00:40:52.530 "data_size": 0 00:40:52.530 }, 00:40:52.530 { 00:40:52.530 "name": "BaseBdev2", 00:40:52.530 "uuid": "bd7f6799-b1c7-45bb-a45e-bc56c179beeb", 00:40:52.530 "is_configured": true, 00:40:52.530 "data_offset": 2048, 00:40:52.530 
"data_size": 63488 00:40:52.530 }, 00:40:52.530 { 00:40:52.530 "name": "BaseBdev3", 00:40:52.530 "uuid": "9e2c29e7-b10f-480a-9b3f-5c664d0429ee", 00:40:52.530 "is_configured": true, 00:40:52.530 "data_offset": 2048, 00:40:52.530 "data_size": 63488 00:40:52.530 }, 00:40:52.530 { 00:40:52.530 "name": "BaseBdev4", 00:40:52.530 "uuid": "2cfd15ba-2679-4066-ae79-356f9b3551ef", 00:40:52.530 "is_configured": true, 00:40:52.530 "data_offset": 2048, 00:40:52.530 "data_size": 63488 00:40:52.530 } 00:40:52.530 ] 00:40:52.530 }' 00:40:52.530 23:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:52.530 23:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:53.097 23:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:40:53.097 23:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:53.097 23:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:53.097 [2024-12-09 23:22:33.478694] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:40:53.097 23:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:53.097 23:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:40:53.097 23:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:53.097 23:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:53.097 23:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:53.097 23:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:53.097 23:22:33 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:40:53.097 23:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:53.097 23:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:53.097 23:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:53.097 23:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:53.098 23:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:53.098 23:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:53.098 23:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:53.098 23:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:53.098 23:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:53.098 23:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:53.098 "name": "Existed_Raid", 00:40:53.098 "uuid": "f08aa057-6878-49c6-9c08-85028b161dbf", 00:40:53.098 "strip_size_kb": 64, 00:40:53.098 "state": "configuring", 00:40:53.098 "raid_level": "raid5f", 00:40:53.098 "superblock": true, 00:40:53.098 "num_base_bdevs": 4, 00:40:53.098 "num_base_bdevs_discovered": 2, 00:40:53.098 "num_base_bdevs_operational": 4, 00:40:53.098 "base_bdevs_list": [ 00:40:53.098 { 00:40:53.098 "name": "BaseBdev1", 00:40:53.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:53.098 "is_configured": false, 00:40:53.098 "data_offset": 0, 00:40:53.098 "data_size": 0 00:40:53.098 }, 00:40:53.098 { 00:40:53.098 "name": null, 00:40:53.098 "uuid": "bd7f6799-b1c7-45bb-a45e-bc56c179beeb", 00:40:53.098 
"is_configured": false, 00:40:53.098 "data_offset": 0, 00:40:53.098 "data_size": 63488 00:40:53.098 }, 00:40:53.098 { 00:40:53.098 "name": "BaseBdev3", 00:40:53.098 "uuid": "9e2c29e7-b10f-480a-9b3f-5c664d0429ee", 00:40:53.098 "is_configured": true, 00:40:53.098 "data_offset": 2048, 00:40:53.098 "data_size": 63488 00:40:53.098 }, 00:40:53.098 { 00:40:53.098 "name": "BaseBdev4", 00:40:53.098 "uuid": "2cfd15ba-2679-4066-ae79-356f9b3551ef", 00:40:53.098 "is_configured": true, 00:40:53.098 "data_offset": 2048, 00:40:53.098 "data_size": 63488 00:40:53.098 } 00:40:53.098 ] 00:40:53.098 }' 00:40:53.098 23:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:53.098 23:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:53.357 23:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:53.357 23:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:40:53.357 23:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:53.357 23:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:53.357 23:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:53.616 23:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:40:53.616 23:22:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:40:53.616 23:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:53.616 23:22:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:53.616 [2024-12-09 23:22:34.041437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:40:53.616 BaseBdev1 00:40:53.616 23:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:53.616 23:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:40:53.616 23:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:40:53.616 23:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:53.616 23:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:40:53.616 23:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:53.616 23:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:53.616 23:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:40:53.616 23:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:53.616 23:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:53.616 23:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:53.616 23:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:40:53.616 23:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:53.616 23:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:53.616 [ 00:40:53.616 { 00:40:53.616 "name": "BaseBdev1", 00:40:53.616 "aliases": [ 00:40:53.616 "490872dd-a556-49a7-8266-9213c99015e8" 00:40:53.616 ], 00:40:53.616 "product_name": "Malloc disk", 00:40:53.616 "block_size": 512, 00:40:53.616 "num_blocks": 65536, 00:40:53.616 "uuid": "490872dd-a556-49a7-8266-9213c99015e8", 
00:40:53.616 "assigned_rate_limits": { 00:40:53.616 "rw_ios_per_sec": 0, 00:40:53.616 "rw_mbytes_per_sec": 0, 00:40:53.616 "r_mbytes_per_sec": 0, 00:40:53.616 "w_mbytes_per_sec": 0 00:40:53.616 }, 00:40:53.616 "claimed": true, 00:40:53.616 "claim_type": "exclusive_write", 00:40:53.616 "zoned": false, 00:40:53.616 "supported_io_types": { 00:40:53.616 "read": true, 00:40:53.616 "write": true, 00:40:53.616 "unmap": true, 00:40:53.616 "flush": true, 00:40:53.616 "reset": true, 00:40:53.616 "nvme_admin": false, 00:40:53.616 "nvme_io": false, 00:40:53.616 "nvme_io_md": false, 00:40:53.616 "write_zeroes": true, 00:40:53.616 "zcopy": true, 00:40:53.616 "get_zone_info": false, 00:40:53.616 "zone_management": false, 00:40:53.616 "zone_append": false, 00:40:53.616 "compare": false, 00:40:53.616 "compare_and_write": false, 00:40:53.616 "abort": true, 00:40:53.616 "seek_hole": false, 00:40:53.616 "seek_data": false, 00:40:53.616 "copy": true, 00:40:53.616 "nvme_iov_md": false 00:40:53.616 }, 00:40:53.616 "memory_domains": [ 00:40:53.616 { 00:40:53.616 "dma_device_id": "system", 00:40:53.616 "dma_device_type": 1 00:40:53.616 }, 00:40:53.616 { 00:40:53.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:53.616 "dma_device_type": 2 00:40:53.616 } 00:40:53.616 ], 00:40:53.616 "driver_specific": {} 00:40:53.616 } 00:40:53.616 ] 00:40:53.616 23:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:53.616 23:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:40:53.616 23:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:40:53.616 23:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:53.616 23:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:53.616 23:22:34 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:53.616 23:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:53.616 23:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:40:53.616 23:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:53.616 23:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:53.616 23:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:53.616 23:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:53.617 23:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:53.617 23:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:53.617 23:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:53.617 23:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:53.617 23:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:53.617 23:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:53.617 "name": "Existed_Raid", 00:40:53.617 "uuid": "f08aa057-6878-49c6-9c08-85028b161dbf", 00:40:53.617 "strip_size_kb": 64, 00:40:53.617 "state": "configuring", 00:40:53.617 "raid_level": "raid5f", 00:40:53.617 "superblock": true, 00:40:53.617 "num_base_bdevs": 4, 00:40:53.617 "num_base_bdevs_discovered": 3, 00:40:53.617 "num_base_bdevs_operational": 4, 00:40:53.617 "base_bdevs_list": [ 00:40:53.617 { 00:40:53.617 "name": "BaseBdev1", 00:40:53.617 "uuid": "490872dd-a556-49a7-8266-9213c99015e8", 
00:40:53.617 "is_configured": true, 00:40:53.617 "data_offset": 2048, 00:40:53.617 "data_size": 63488 00:40:53.617 }, 00:40:53.617 { 00:40:53.617 "name": null, 00:40:53.617 "uuid": "bd7f6799-b1c7-45bb-a45e-bc56c179beeb", 00:40:53.617 "is_configured": false, 00:40:53.617 "data_offset": 0, 00:40:53.617 "data_size": 63488 00:40:53.617 }, 00:40:53.617 { 00:40:53.617 "name": "BaseBdev3", 00:40:53.617 "uuid": "9e2c29e7-b10f-480a-9b3f-5c664d0429ee", 00:40:53.617 "is_configured": true, 00:40:53.617 "data_offset": 2048, 00:40:53.617 "data_size": 63488 00:40:53.617 }, 00:40:53.617 { 00:40:53.617 "name": "BaseBdev4", 00:40:53.617 "uuid": "2cfd15ba-2679-4066-ae79-356f9b3551ef", 00:40:53.617 "is_configured": true, 00:40:53.617 "data_offset": 2048, 00:40:53.617 "data_size": 63488 00:40:53.617 } 00:40:53.617 ] 00:40:53.617 }' 00:40:53.617 23:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:53.617 23:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:54.185 23:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:40:54.185 23:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:54.185 23:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:54.185 23:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:54.185 23:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:54.186 23:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:40:54.186 23:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:40:54.186 23:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:40:54.186 23:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:54.186 [2024-12-09 23:22:34.580807] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:40:54.186 23:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:54.186 23:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:40:54.186 23:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:54.186 23:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:54.186 23:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:54.186 23:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:54.186 23:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:40:54.186 23:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:54.186 23:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:54.186 23:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:54.186 23:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:54.186 23:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:54.186 23:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:54.186 23:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:54.186 23:22:34 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:40:54.186 23:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:54.186 23:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:54.186 "name": "Existed_Raid", 00:40:54.186 "uuid": "f08aa057-6878-49c6-9c08-85028b161dbf", 00:40:54.186 "strip_size_kb": 64, 00:40:54.186 "state": "configuring", 00:40:54.186 "raid_level": "raid5f", 00:40:54.186 "superblock": true, 00:40:54.186 "num_base_bdevs": 4, 00:40:54.186 "num_base_bdevs_discovered": 2, 00:40:54.186 "num_base_bdevs_operational": 4, 00:40:54.186 "base_bdevs_list": [ 00:40:54.186 { 00:40:54.186 "name": "BaseBdev1", 00:40:54.186 "uuid": "490872dd-a556-49a7-8266-9213c99015e8", 00:40:54.186 "is_configured": true, 00:40:54.186 "data_offset": 2048, 00:40:54.186 "data_size": 63488 00:40:54.186 }, 00:40:54.186 { 00:40:54.186 "name": null, 00:40:54.186 "uuid": "bd7f6799-b1c7-45bb-a45e-bc56c179beeb", 00:40:54.186 "is_configured": false, 00:40:54.186 "data_offset": 0, 00:40:54.186 "data_size": 63488 00:40:54.186 }, 00:40:54.186 { 00:40:54.186 "name": null, 00:40:54.186 "uuid": "9e2c29e7-b10f-480a-9b3f-5c664d0429ee", 00:40:54.186 "is_configured": false, 00:40:54.186 "data_offset": 0, 00:40:54.186 "data_size": 63488 00:40:54.186 }, 00:40:54.186 { 00:40:54.186 "name": "BaseBdev4", 00:40:54.186 "uuid": "2cfd15ba-2679-4066-ae79-356f9b3551ef", 00:40:54.186 "is_configured": true, 00:40:54.186 "data_offset": 2048, 00:40:54.186 "data_size": 63488 00:40:54.186 } 00:40:54.186 ] 00:40:54.186 }' 00:40:54.186 23:22:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:54.186 23:22:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:54.445 23:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:40:54.445 23:22:35 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:54.445 23:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:54.445 23:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:54.445 23:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:54.445 23:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:40:54.445 23:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:40:54.445 23:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:54.445 23:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:54.445 [2024-12-09 23:22:35.056644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:40:54.445 23:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:54.446 23:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:40:54.446 23:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:54.446 23:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:54.446 23:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:54.446 23:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:54.446 23:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:40:54.446 23:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:40:54.446 23:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:54.446 23:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:54.446 23:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:54.446 23:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:54.446 23:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:54.446 23:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:54.446 23:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:54.704 23:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:54.704 23:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:54.704 "name": "Existed_Raid", 00:40:54.704 "uuid": "f08aa057-6878-49c6-9c08-85028b161dbf", 00:40:54.704 "strip_size_kb": 64, 00:40:54.704 "state": "configuring", 00:40:54.704 "raid_level": "raid5f", 00:40:54.704 "superblock": true, 00:40:54.704 "num_base_bdevs": 4, 00:40:54.704 "num_base_bdevs_discovered": 3, 00:40:54.704 "num_base_bdevs_operational": 4, 00:40:54.704 "base_bdevs_list": [ 00:40:54.704 { 00:40:54.704 "name": "BaseBdev1", 00:40:54.704 "uuid": "490872dd-a556-49a7-8266-9213c99015e8", 00:40:54.704 "is_configured": true, 00:40:54.704 "data_offset": 2048, 00:40:54.704 "data_size": 63488 00:40:54.704 }, 00:40:54.704 { 00:40:54.704 "name": null, 00:40:54.704 "uuid": "bd7f6799-b1c7-45bb-a45e-bc56c179beeb", 00:40:54.704 "is_configured": false, 00:40:54.704 "data_offset": 0, 00:40:54.704 "data_size": 63488 00:40:54.704 }, 00:40:54.704 { 00:40:54.704 "name": "BaseBdev3", 00:40:54.704 "uuid": "9e2c29e7-b10f-480a-9b3f-5c664d0429ee", 
00:40:54.704 "is_configured": true, 00:40:54.704 "data_offset": 2048, 00:40:54.704 "data_size": 63488 00:40:54.704 }, 00:40:54.704 { 00:40:54.704 "name": "BaseBdev4", 00:40:54.704 "uuid": "2cfd15ba-2679-4066-ae79-356f9b3551ef", 00:40:54.704 "is_configured": true, 00:40:54.704 "data_offset": 2048, 00:40:54.704 "data_size": 63488 00:40:54.704 } 00:40:54.704 ] 00:40:54.704 }' 00:40:54.704 23:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:54.704 23:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:54.963 23:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:54.963 23:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:40:54.963 23:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:54.963 23:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:54.963 23:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:54.963 23:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:40:54.963 23:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:40:54.963 23:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:54.963 23:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:54.963 [2024-12-09 23:22:35.528026] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:40:55.222 23:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:55.222 23:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:40:55.222 23:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:55.222 23:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:55.222 23:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:55.222 23:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:55.222 23:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:40:55.222 23:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:55.222 23:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:55.222 23:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:55.222 23:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:55.222 23:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:55.222 23:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:55.222 23:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:55.222 23:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:55.222 23:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:55.222 23:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:55.222 "name": "Existed_Raid", 00:40:55.222 "uuid": "f08aa057-6878-49c6-9c08-85028b161dbf", 00:40:55.222 "strip_size_kb": 64, 00:40:55.222 "state": "configuring", 00:40:55.222 "raid_level": "raid5f", 
00:40:55.222 "superblock": true, 00:40:55.222 "num_base_bdevs": 4, 00:40:55.222 "num_base_bdevs_discovered": 2, 00:40:55.222 "num_base_bdevs_operational": 4, 00:40:55.222 "base_bdevs_list": [ 00:40:55.222 { 00:40:55.222 "name": null, 00:40:55.222 "uuid": "490872dd-a556-49a7-8266-9213c99015e8", 00:40:55.222 "is_configured": false, 00:40:55.222 "data_offset": 0, 00:40:55.222 "data_size": 63488 00:40:55.222 }, 00:40:55.222 { 00:40:55.222 "name": null, 00:40:55.222 "uuid": "bd7f6799-b1c7-45bb-a45e-bc56c179beeb", 00:40:55.222 "is_configured": false, 00:40:55.222 "data_offset": 0, 00:40:55.222 "data_size": 63488 00:40:55.222 }, 00:40:55.222 { 00:40:55.222 "name": "BaseBdev3", 00:40:55.222 "uuid": "9e2c29e7-b10f-480a-9b3f-5c664d0429ee", 00:40:55.222 "is_configured": true, 00:40:55.222 "data_offset": 2048, 00:40:55.222 "data_size": 63488 00:40:55.222 }, 00:40:55.222 { 00:40:55.222 "name": "BaseBdev4", 00:40:55.222 "uuid": "2cfd15ba-2679-4066-ae79-356f9b3551ef", 00:40:55.222 "is_configured": true, 00:40:55.222 "data_offset": 2048, 00:40:55.222 "data_size": 63488 00:40:55.222 } 00:40:55.222 ] 00:40:55.222 }' 00:40:55.222 23:22:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:55.222 23:22:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:55.482 23:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:55.482 23:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:40:55.482 23:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:55.482 23:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:55.482 23:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:55.482 23:22:36 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:40:55.482 23:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:40:55.482 23:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:55.482 23:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:55.482 [2024-12-09 23:22:36.089686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:40:55.482 23:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:55.482 23:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:40:55.482 23:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:55.482 23:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:55.482 23:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:55.482 23:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:55.482 23:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:40:55.482 23:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:55.482 23:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:55.482 23:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:55.482 23:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:55.482 23:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:40:55.482 23:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:55.482 23:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:55.482 23:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:55.741 23:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:55.741 23:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:55.741 "name": "Existed_Raid", 00:40:55.741 "uuid": "f08aa057-6878-49c6-9c08-85028b161dbf", 00:40:55.741 "strip_size_kb": 64, 00:40:55.741 "state": "configuring", 00:40:55.741 "raid_level": "raid5f", 00:40:55.741 "superblock": true, 00:40:55.741 "num_base_bdevs": 4, 00:40:55.741 "num_base_bdevs_discovered": 3, 00:40:55.741 "num_base_bdevs_operational": 4, 00:40:55.741 "base_bdevs_list": [ 00:40:55.741 { 00:40:55.741 "name": null, 00:40:55.741 "uuid": "490872dd-a556-49a7-8266-9213c99015e8", 00:40:55.741 "is_configured": false, 00:40:55.741 "data_offset": 0, 00:40:55.741 "data_size": 63488 00:40:55.741 }, 00:40:55.741 { 00:40:55.741 "name": "BaseBdev2", 00:40:55.741 "uuid": "bd7f6799-b1c7-45bb-a45e-bc56c179beeb", 00:40:55.741 "is_configured": true, 00:40:55.741 "data_offset": 2048, 00:40:55.741 "data_size": 63488 00:40:55.741 }, 00:40:55.741 { 00:40:55.741 "name": "BaseBdev3", 00:40:55.741 "uuid": "9e2c29e7-b10f-480a-9b3f-5c664d0429ee", 00:40:55.741 "is_configured": true, 00:40:55.741 "data_offset": 2048, 00:40:55.741 "data_size": 63488 00:40:55.741 }, 00:40:55.741 { 00:40:55.741 "name": "BaseBdev4", 00:40:55.741 "uuid": "2cfd15ba-2679-4066-ae79-356f9b3551ef", 00:40:55.741 "is_configured": true, 00:40:55.741 "data_offset": 2048, 00:40:55.741 "data_size": 63488 00:40:55.741 } 00:40:55.741 ] 00:40:55.741 }' 00:40:55.741 23:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:40:55.741 23:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:56.000 23:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:56.000 23:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:40:56.000 23:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:56.000 23:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:56.000 23:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:56.000 23:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:40:56.000 23:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:56.000 23:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:56.000 23:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:56.000 23:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:40:56.000 23:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:56.000 23:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 490872dd-a556-49a7-8266-9213c99015e8 00:40:56.000 23:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:56.000 23:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:56.259 [2024-12-09 23:22:36.662939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:40:56.259 [2024-12-09 23:22:36.663266] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:40:56.259 [2024-12-09 23:22:36.663285] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:40:56.259 NewBaseBdev 00:40:56.259 [2024-12-09 23:22:36.663654] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:40:56.259 23:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:56.259 23:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:40:56.259 23:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:40:56.259 23:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:56.259 23:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:40:56.259 23:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:56.259 23:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:56.259 23:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:40:56.259 23:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:56.260 23:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:56.260 [2024-12-09 23:22:36.671505] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:40:56.260 [2024-12-09 23:22:36.671533] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:40:56.260 [2024-12-09 23:22:36.671832] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:56.260 23:22:36 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:56.260 23:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:40:56.260 23:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:56.260 23:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:56.260 [ 00:40:56.260 { 00:40:56.260 "name": "NewBaseBdev", 00:40:56.260 "aliases": [ 00:40:56.260 "490872dd-a556-49a7-8266-9213c99015e8" 00:40:56.260 ], 00:40:56.260 "product_name": "Malloc disk", 00:40:56.260 "block_size": 512, 00:40:56.260 "num_blocks": 65536, 00:40:56.260 "uuid": "490872dd-a556-49a7-8266-9213c99015e8", 00:40:56.260 "assigned_rate_limits": { 00:40:56.260 "rw_ios_per_sec": 0, 00:40:56.260 "rw_mbytes_per_sec": 0, 00:40:56.260 "r_mbytes_per_sec": 0, 00:40:56.260 "w_mbytes_per_sec": 0 00:40:56.260 }, 00:40:56.260 "claimed": true, 00:40:56.260 "claim_type": "exclusive_write", 00:40:56.260 "zoned": false, 00:40:56.260 "supported_io_types": { 00:40:56.260 "read": true, 00:40:56.260 "write": true, 00:40:56.260 "unmap": true, 00:40:56.260 "flush": true, 00:40:56.260 "reset": true, 00:40:56.260 "nvme_admin": false, 00:40:56.260 "nvme_io": false, 00:40:56.260 "nvme_io_md": false, 00:40:56.260 "write_zeroes": true, 00:40:56.260 "zcopy": true, 00:40:56.260 "get_zone_info": false, 00:40:56.260 "zone_management": false, 00:40:56.260 "zone_append": false, 00:40:56.260 "compare": false, 00:40:56.260 "compare_and_write": false, 00:40:56.260 "abort": true, 00:40:56.260 "seek_hole": false, 00:40:56.260 "seek_data": false, 00:40:56.260 "copy": true, 00:40:56.260 "nvme_iov_md": false 00:40:56.260 }, 00:40:56.260 "memory_domains": [ 00:40:56.260 { 00:40:56.260 "dma_device_id": "system", 00:40:56.260 "dma_device_type": 1 00:40:56.260 }, 00:40:56.260 { 00:40:56.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:56.260 "dma_device_type": 2 00:40:56.260 } 
00:40:56.260 ], 00:40:56.260 "driver_specific": {} 00:40:56.260 } 00:40:56.260 ] 00:40:56.260 23:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:56.260 23:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:40:56.260 23:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:40:56.260 23:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:56.260 23:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:56.260 23:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:56.260 23:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:56.260 23:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:40:56.260 23:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:56.260 23:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:56.260 23:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:56.260 23:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:56.260 23:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:56.260 23:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:56.260 23:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:56.260 23:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:56.260 
23:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:56.260 23:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:56.260 "name": "Existed_Raid", 00:40:56.260 "uuid": "f08aa057-6878-49c6-9c08-85028b161dbf", 00:40:56.260 "strip_size_kb": 64, 00:40:56.260 "state": "online", 00:40:56.260 "raid_level": "raid5f", 00:40:56.260 "superblock": true, 00:40:56.260 "num_base_bdevs": 4, 00:40:56.260 "num_base_bdevs_discovered": 4, 00:40:56.260 "num_base_bdevs_operational": 4, 00:40:56.260 "base_bdevs_list": [ 00:40:56.260 { 00:40:56.260 "name": "NewBaseBdev", 00:40:56.260 "uuid": "490872dd-a556-49a7-8266-9213c99015e8", 00:40:56.260 "is_configured": true, 00:40:56.260 "data_offset": 2048, 00:40:56.260 "data_size": 63488 00:40:56.260 }, 00:40:56.260 { 00:40:56.260 "name": "BaseBdev2", 00:40:56.260 "uuid": "bd7f6799-b1c7-45bb-a45e-bc56c179beeb", 00:40:56.260 "is_configured": true, 00:40:56.260 "data_offset": 2048, 00:40:56.260 "data_size": 63488 00:40:56.260 }, 00:40:56.260 { 00:40:56.260 "name": "BaseBdev3", 00:40:56.260 "uuid": "9e2c29e7-b10f-480a-9b3f-5c664d0429ee", 00:40:56.260 "is_configured": true, 00:40:56.260 "data_offset": 2048, 00:40:56.260 "data_size": 63488 00:40:56.260 }, 00:40:56.260 { 00:40:56.260 "name": "BaseBdev4", 00:40:56.260 "uuid": "2cfd15ba-2679-4066-ae79-356f9b3551ef", 00:40:56.260 "is_configured": true, 00:40:56.260 "data_offset": 2048, 00:40:56.260 "data_size": 63488 00:40:56.260 } 00:40:56.260 ] 00:40:56.260 }' 00:40:56.260 23:22:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:56.260 23:22:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:56.519 23:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:40:56.519 23:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:40:56.519 23:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:40:56.519 23:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:40:56.519 23:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:40:56.519 23:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:40:56.519 23:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:40:56.519 23:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:40:56.519 23:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:56.519 23:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:56.519 [2024-12-09 23:22:37.129050] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:56.778 23:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:56.778 23:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:40:56.778 "name": "Existed_Raid", 00:40:56.778 "aliases": [ 00:40:56.778 "f08aa057-6878-49c6-9c08-85028b161dbf" 00:40:56.778 ], 00:40:56.778 "product_name": "Raid Volume", 00:40:56.778 "block_size": 512, 00:40:56.778 "num_blocks": 190464, 00:40:56.778 "uuid": "f08aa057-6878-49c6-9c08-85028b161dbf", 00:40:56.778 "assigned_rate_limits": { 00:40:56.778 "rw_ios_per_sec": 0, 00:40:56.778 "rw_mbytes_per_sec": 0, 00:40:56.778 "r_mbytes_per_sec": 0, 00:40:56.778 "w_mbytes_per_sec": 0 00:40:56.778 }, 00:40:56.778 "claimed": false, 00:40:56.778 "zoned": false, 00:40:56.778 "supported_io_types": { 00:40:56.778 "read": true, 00:40:56.778 "write": true, 00:40:56.778 "unmap": false, 00:40:56.778 "flush": false, 
00:40:56.778 "reset": true, 00:40:56.778 "nvme_admin": false, 00:40:56.778 "nvme_io": false, 00:40:56.778 "nvme_io_md": false, 00:40:56.778 "write_zeroes": true, 00:40:56.778 "zcopy": false, 00:40:56.778 "get_zone_info": false, 00:40:56.778 "zone_management": false, 00:40:56.778 "zone_append": false, 00:40:56.778 "compare": false, 00:40:56.778 "compare_and_write": false, 00:40:56.778 "abort": false, 00:40:56.778 "seek_hole": false, 00:40:56.778 "seek_data": false, 00:40:56.778 "copy": false, 00:40:56.778 "nvme_iov_md": false 00:40:56.778 }, 00:40:56.778 "driver_specific": { 00:40:56.778 "raid": { 00:40:56.778 "uuid": "f08aa057-6878-49c6-9c08-85028b161dbf", 00:40:56.778 "strip_size_kb": 64, 00:40:56.778 "state": "online", 00:40:56.778 "raid_level": "raid5f", 00:40:56.778 "superblock": true, 00:40:56.778 "num_base_bdevs": 4, 00:40:56.778 "num_base_bdevs_discovered": 4, 00:40:56.778 "num_base_bdevs_operational": 4, 00:40:56.778 "base_bdevs_list": [ 00:40:56.778 { 00:40:56.778 "name": "NewBaseBdev", 00:40:56.778 "uuid": "490872dd-a556-49a7-8266-9213c99015e8", 00:40:56.778 "is_configured": true, 00:40:56.778 "data_offset": 2048, 00:40:56.778 "data_size": 63488 00:40:56.778 }, 00:40:56.778 { 00:40:56.778 "name": "BaseBdev2", 00:40:56.778 "uuid": "bd7f6799-b1c7-45bb-a45e-bc56c179beeb", 00:40:56.778 "is_configured": true, 00:40:56.778 "data_offset": 2048, 00:40:56.778 "data_size": 63488 00:40:56.778 }, 00:40:56.778 { 00:40:56.778 "name": "BaseBdev3", 00:40:56.778 "uuid": "9e2c29e7-b10f-480a-9b3f-5c664d0429ee", 00:40:56.778 "is_configured": true, 00:40:56.778 "data_offset": 2048, 00:40:56.778 "data_size": 63488 00:40:56.778 }, 00:40:56.778 { 00:40:56.778 "name": "BaseBdev4", 00:40:56.778 "uuid": "2cfd15ba-2679-4066-ae79-356f9b3551ef", 00:40:56.778 "is_configured": true, 00:40:56.778 "data_offset": 2048, 00:40:56.778 "data_size": 63488 00:40:56.778 } 00:40:56.778 ] 00:40:56.778 } 00:40:56.778 } 00:40:56.778 }' 00:40:56.778 23:22:37 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:40:56.778 23:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:40:56.778 BaseBdev2 00:40:56.778 BaseBdev3 00:40:56.778 BaseBdev4' 00:40:56.778 23:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:56.779 23:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:40:56.779 23:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:40:56.779 23:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:40:56.779 23:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:56.779 23:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:56.779 23:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:56.779 23:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:56.779 23:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:40:56.779 23:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:40:56.779 23:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:40:56.779 23:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:40:56.779 23:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:56.779 23:22:37 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:40:56.779 23:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:56.779 23:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:56.779 23:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:40:56.779 23:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:40:56.779 23:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:40:56.779 23:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:56.779 23:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:40:56.779 23:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:56.779 23:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:56.779 23:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:56.779 23:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:40:56.779 23:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:40:56.779 23:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:40:56.779 23:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:56.779 23:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:40:56.779 23:22:37 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:56.779 23:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:57.038 23:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:57.038 23:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:40:57.038 23:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:40:57.038 23:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:40:57.038 23:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:57.038 23:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:57.038 [2024-12-09 23:22:37.448541] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:40:57.038 [2024-12-09 23:22:37.448581] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:40:57.038 [2024-12-09 23:22:37.448682] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:57.038 [2024-12-09 23:22:37.449037] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:40:57.038 [2024-12-09 23:22:37.449053] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:40:57.038 23:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:57.038 23:22:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83320 00:40:57.038 23:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83320 ']' 00:40:57.038 23:22:37 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 83320 00:40:57.038 23:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:40:57.038 23:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:57.038 23:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83320 00:40:57.038 killing process with pid 83320 00:40:57.038 23:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:57.038 23:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:57.038 23:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83320' 00:40:57.038 23:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83320 00:40:57.038 [2024-12-09 23:22:37.501002] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:40:57.038 23:22:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83320 00:40:57.606 [2024-12-09 23:22:37.936041] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:40:58.983 ************************************ 00:40:58.983 END TEST raid5f_state_function_test_sb 00:40:58.983 ************************************ 00:40:58.983 23:22:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:40:58.983 00:40:58.983 real 0m11.868s 00:40:58.983 user 0m18.674s 00:40:58.983 sys 0m2.458s 00:40:58.983 23:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:58.983 23:22:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:58.983 23:22:39 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:40:58.983 23:22:39 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:58.983 23:22:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:58.983 23:22:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:40:58.983 ************************************ 00:40:58.983 START TEST raid5f_superblock_test 00:40:58.983 ************************************ 00:40:58.983 23:22:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:40:58.983 23:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:40:58.983 23:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:40:58.983 23:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:40:58.983 23:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:40:58.983 23:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:40:58.983 23:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:40:58.983 23:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:40:58.983 23:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:40:58.983 23:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:40:58.983 23:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:40:58.983 23:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:40:58.983 23:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:40:58.983 23:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:40:58.983 23:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:40:58.983 23:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:40:58.983 23:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:40:58.983 23:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83995 00:40:58.983 23:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:40:58.983 23:22:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83995 00:40:58.983 23:22:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 83995 ']' 00:40:58.983 23:22:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:58.983 23:22:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:58.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:58.983 23:22:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:58.983 23:22:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:58.983 23:22:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:58.983 [2024-12-09 23:22:39.380019] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:40:58.983 [2024-12-09 23:22:39.380336] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83995 ] 00:40:58.983 [2024-12-09 23:22:39.549768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:59.242 [2024-12-09 23:22:39.673071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:59.500 [2024-12-09 23:22:39.891734] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:40:59.501 [2024-12-09 23:22:39.891771] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:40:59.762 23:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:59.762 23:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:40:59.762 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:40:59.762 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:40:59.762 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:40:59.762 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:40:59.762 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:40:59.762 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:40:59.762 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:40:59.762 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:40:59.762 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:40:59.762 23:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:59.762 23:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:59.762 malloc1 00:40:59.762 23:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:59.762 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:40:59.762 23:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:59.762 23:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:59.762 [2024-12-09 23:22:40.286078] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:40:59.762 [2024-12-09 23:22:40.286157] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:59.762 [2024-12-09 23:22:40.286183] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:40:59.762 [2024-12-09 23:22:40.286195] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:59.762 [2024-12-09 23:22:40.288682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:59.762 [2024-12-09 23:22:40.288862] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:40:59.762 pt1 00:40:59.762 23:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:59.762 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:40:59.762 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:40:59.762 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:40:59.762 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:40:59.762 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:40:59.762 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:40:59.762 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:40:59.763 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:40:59.763 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:40:59.763 23:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:59.763 23:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:59.763 malloc2 00:40:59.763 23:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:59.763 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:40:59.763 23:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:59.763 23:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:59.763 [2024-12-09 23:22:40.343246] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:40:59.763 [2024-12-09 23:22:40.343325] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:59.763 [2024-12-09 23:22:40.343354] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:40:59.763 [2024-12-09 23:22:40.343366] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:59.763 [2024-12-09 23:22:40.345795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:59.763 [2024-12-09 23:22:40.345839] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:40:59.763 pt2 00:40:59.763 23:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:59.763 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:40:59.763 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:40:59.763 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:40:59.763 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:40:59.763 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:40:59.763 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:40:59.763 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:40:59.763 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:40:59.763 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:40:59.763 23:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:59.763 23:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:00.022 malloc3 00:41:00.022 23:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:00.022 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:41:00.022 23:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:00.022 23:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:00.022 [2024-12-09 23:22:40.414790] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:41:00.022 [2024-12-09 23:22:40.414857] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:00.022 [2024-12-09 23:22:40.414883] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:41:00.022 [2024-12-09 23:22:40.414896] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:00.022 [2024-12-09 23:22:40.417270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:00.022 [2024-12-09 23:22:40.417312] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:41:00.022 pt3 00:41:00.022 23:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:00.022 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:41:00.022 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:41:00.022 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:41:00.022 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:41:00.022 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:41:00.022 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:41:00.022 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:41:00.022 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:41:00.022 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:41:00.022 23:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:00.022 23:22:40 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:00.022 malloc4 00:41:00.022 23:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:00.022 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:41:00.022 23:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:00.022 23:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:00.022 [2024-12-09 23:22:40.472725] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:41:00.022 [2024-12-09 23:22:40.472968] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:00.022 [2024-12-09 23:22:40.473028] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:41:00.022 [2024-12-09 23:22:40.473117] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:00.022 [2024-12-09 23:22:40.475556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:00.022 [2024-12-09 23:22:40.475701] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:41:00.022 pt4 00:41:00.022 23:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:00.022 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:41:00.022 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:41:00.022 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:41:00.022 23:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:00.022 23:22:40 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:41:00.022 [2024-12-09 23:22:40.484742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:41:00.022 [2024-12-09 23:22:40.486923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:41:00.022 [2024-12-09 23:22:40.487012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:41:00.023 [2024-12-09 23:22:40.487056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:41:00.023 [2024-12-09 23:22:40.487248] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:41:00.023 [2024-12-09 23:22:40.487266] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:41:00.023 [2024-12-09 23:22:40.487543] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:41:00.023 [2024-12-09 23:22:40.495324] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:41:00.023 [2024-12-09 23:22:40.495350] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:41:00.023 [2024-12-09 23:22:40.495577] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:00.023 23:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:00.023 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:41:00.023 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:00.023 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:00.023 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:00.023 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:00.023 
23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:41:00.023 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:00.023 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:00.023 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:00.023 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:00.023 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:00.023 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:00.023 23:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:00.023 23:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:00.023 23:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:00.023 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:00.023 "name": "raid_bdev1", 00:41:00.023 "uuid": "ab6a9081-c785-4331-828b-d913d9ccb179", 00:41:00.023 "strip_size_kb": 64, 00:41:00.023 "state": "online", 00:41:00.023 "raid_level": "raid5f", 00:41:00.023 "superblock": true, 00:41:00.023 "num_base_bdevs": 4, 00:41:00.023 "num_base_bdevs_discovered": 4, 00:41:00.023 "num_base_bdevs_operational": 4, 00:41:00.023 "base_bdevs_list": [ 00:41:00.023 { 00:41:00.023 "name": "pt1", 00:41:00.023 "uuid": "00000000-0000-0000-0000-000000000001", 00:41:00.023 "is_configured": true, 00:41:00.023 "data_offset": 2048, 00:41:00.023 "data_size": 63488 00:41:00.023 }, 00:41:00.023 { 00:41:00.023 "name": "pt2", 00:41:00.023 "uuid": "00000000-0000-0000-0000-000000000002", 00:41:00.023 "is_configured": true, 00:41:00.023 "data_offset": 2048, 00:41:00.023 
"data_size": 63488 00:41:00.023 }, 00:41:00.023 { 00:41:00.023 "name": "pt3", 00:41:00.023 "uuid": "00000000-0000-0000-0000-000000000003", 00:41:00.023 "is_configured": true, 00:41:00.023 "data_offset": 2048, 00:41:00.023 "data_size": 63488 00:41:00.023 }, 00:41:00.023 { 00:41:00.023 "name": "pt4", 00:41:00.023 "uuid": "00000000-0000-0000-0000-000000000004", 00:41:00.023 "is_configured": true, 00:41:00.023 "data_offset": 2048, 00:41:00.023 "data_size": 63488 00:41:00.023 } 00:41:00.023 ] 00:41:00.023 }' 00:41:00.023 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:00.023 23:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:00.591 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:41:00.591 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:41:00.591 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:41:00.591 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:41:00.591 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:41:00.591 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:41:00.591 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:41:00.591 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:41:00.591 23:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:00.591 23:22:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:00.591 [2024-12-09 23:22:40.963274] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:41:00.591 23:22:40 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:00.591 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:41:00.591 "name": "raid_bdev1", 00:41:00.591 "aliases": [ 00:41:00.591 "ab6a9081-c785-4331-828b-d913d9ccb179" 00:41:00.591 ], 00:41:00.591 "product_name": "Raid Volume", 00:41:00.591 "block_size": 512, 00:41:00.591 "num_blocks": 190464, 00:41:00.591 "uuid": "ab6a9081-c785-4331-828b-d913d9ccb179", 00:41:00.591 "assigned_rate_limits": { 00:41:00.591 "rw_ios_per_sec": 0, 00:41:00.591 "rw_mbytes_per_sec": 0, 00:41:00.591 "r_mbytes_per_sec": 0, 00:41:00.591 "w_mbytes_per_sec": 0 00:41:00.591 }, 00:41:00.591 "claimed": false, 00:41:00.591 "zoned": false, 00:41:00.591 "supported_io_types": { 00:41:00.591 "read": true, 00:41:00.591 "write": true, 00:41:00.591 "unmap": false, 00:41:00.591 "flush": false, 00:41:00.591 "reset": true, 00:41:00.591 "nvme_admin": false, 00:41:00.591 "nvme_io": false, 00:41:00.591 "nvme_io_md": false, 00:41:00.591 "write_zeroes": true, 00:41:00.591 "zcopy": false, 00:41:00.591 "get_zone_info": false, 00:41:00.591 "zone_management": false, 00:41:00.591 "zone_append": false, 00:41:00.591 "compare": false, 00:41:00.591 "compare_and_write": false, 00:41:00.591 "abort": false, 00:41:00.591 "seek_hole": false, 00:41:00.591 "seek_data": false, 00:41:00.591 "copy": false, 00:41:00.591 "nvme_iov_md": false 00:41:00.591 }, 00:41:00.591 "driver_specific": { 00:41:00.591 "raid": { 00:41:00.591 "uuid": "ab6a9081-c785-4331-828b-d913d9ccb179", 00:41:00.591 "strip_size_kb": 64, 00:41:00.591 "state": "online", 00:41:00.591 "raid_level": "raid5f", 00:41:00.592 "superblock": true, 00:41:00.592 "num_base_bdevs": 4, 00:41:00.592 "num_base_bdevs_discovered": 4, 00:41:00.592 "num_base_bdevs_operational": 4, 00:41:00.592 "base_bdevs_list": [ 00:41:00.592 { 00:41:00.592 "name": "pt1", 00:41:00.592 "uuid": "00000000-0000-0000-0000-000000000001", 00:41:00.592 "is_configured": true, 00:41:00.592 "data_offset": 2048, 
00:41:00.592 "data_size": 63488 00:41:00.592 }, 00:41:00.592 { 00:41:00.592 "name": "pt2", 00:41:00.592 "uuid": "00000000-0000-0000-0000-000000000002", 00:41:00.592 "is_configured": true, 00:41:00.592 "data_offset": 2048, 00:41:00.592 "data_size": 63488 00:41:00.592 }, 00:41:00.592 { 00:41:00.592 "name": "pt3", 00:41:00.592 "uuid": "00000000-0000-0000-0000-000000000003", 00:41:00.592 "is_configured": true, 00:41:00.592 "data_offset": 2048, 00:41:00.592 "data_size": 63488 00:41:00.592 }, 00:41:00.592 { 00:41:00.592 "name": "pt4", 00:41:00.592 "uuid": "00000000-0000-0000-0000-000000000004", 00:41:00.592 "is_configured": true, 00:41:00.592 "data_offset": 2048, 00:41:00.592 "data_size": 63488 00:41:00.592 } 00:41:00.592 ] 00:41:00.592 } 00:41:00.592 } 00:41:00.592 }' 00:41:00.592 23:22:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:41:00.592 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:41:00.592 pt2 00:41:00.592 pt3 00:41:00.592 pt4' 00:41:00.592 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:41:00.592 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:41:00.592 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:41:00.592 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:41:00.592 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:00.592 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:00.592 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:41:00.592 23:22:41 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:00.592 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:41:00.592 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:41:00.592 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:41:00.592 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:41:00.592 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:41:00.592 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:00.592 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:00.592 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:00.592 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:41:00.592 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:41:00.592 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:41:00.592 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:41:00.592 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:41:00.592 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:00.592 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:00.592 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:00.852 [2024-12-09 23:22:41.294804] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ab6a9081-c785-4331-828b-d913d9ccb179 00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
ab6a9081-c785-4331-828b-d913d9ccb179 ']'
00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:41:00.852 [2024-12-09 23:22:41.334566] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:41:00.852 [2024-12-09 23:22:41.334600] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:41:00.852 [2024-12-09 23:22:41.334684] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:41:00.852 [2024-12-09 23:22:41.334770] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:41:00.852 [2024-12-09 23:22:41.334788] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:41:00.852 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:41:01.112 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:01.112 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:41:01.112 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:41:01.112 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:41:01.112 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:41:01.112 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:41:01.112 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:41:01.112 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:41:01.112 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:41:01.112 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:41:01.112 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:01.112 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:41:01.112 [2024-12-09 23:22:41.502600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:41:01.112 [2024-12-09 23:22:41.504859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:41:01.112 [2024-12-09 23:22:41.505021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:41:01.112 [2024-12-09 23:22:41.505093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:41:01.112 [2024-12-09 23:22:41.505239] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:41:01.112 [2024-12-09 23:22:41.505346] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:41:01.112 [2024-12-09 23:22:41.505513] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:41:01.112 [2024-12-09 23:22:41.505633] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:41:01.112 [2024-12-09 23:22:41.505750] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:41:01.112 [2024-12-09 23:22:41.505791] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:41:01.112 request:
00:41:01.112 {
00:41:01.112 "name": "raid_bdev1",
00:41:01.112 "raid_level": "raid5f",
00:41:01.112 "base_bdevs": [
00:41:01.112 "malloc1",
00:41:01.112 "malloc2",
00:41:01.112 "malloc3",
00:41:01.112 "malloc4"
00:41:01.112 ],
00:41:01.112 "strip_size_kb": 64,
00:41:01.113 "superblock": false,
00:41:01.113 "method": "bdev_raid_create",
00:41:01.113 "req_id": 1
00:41:01.113 }
00:41:01.113 Got JSON-RPC error response
00:41:01.113 response:
00:41:01.113 {
00:41:01.113 "code": -17,
00:41:01.113 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:41:01.113 }
00:41:01.113 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:41:01.113 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:41:01.113 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:41:01.113 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:41:01.113 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:41:01.113 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:41:01.113 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:41:01.113 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:01.113 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:41:01.113 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:01.113 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:41:01.113 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:41:01.113 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:41:01.113 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:01.113 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:41:01.113 [2024-12-09 23:22:41.566570] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:41:01.113 [2024-12-09 23:22:41.566810] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:41:01.113 [2024-12-09 23:22:41.566839] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:41:01.113 [2024-12-09 23:22:41.566854] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:41:01.113 [2024-12-09 23:22:41.569328] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:41:01.113 [2024-12-09 23:22:41.569378] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:41:01.113 [2024-12-09 23:22:41.569485] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:41:01.113 [2024-12-09 23:22:41.569546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:41:01.113 pt1
00:41:01.113 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:01.113 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
00:41:01.113 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:41:01.113 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:41:01.113 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:41:01.113 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:41:01.113 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:41:01.113 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:41:01.113 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:41:01.113 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:41:01.113 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:41:01.113 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:41:01.113 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:41:01.113 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:01.113 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:41:01.113 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:01.113 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:41:01.113 "name": "raid_bdev1",
00:41:01.113 "uuid": "ab6a9081-c785-4331-828b-d913d9ccb179",
00:41:01.113 "strip_size_kb": 64,
00:41:01.113 "state": "configuring",
00:41:01.113 "raid_level": "raid5f",
00:41:01.113 "superblock": true,
00:41:01.113 "num_base_bdevs": 4,
00:41:01.113 "num_base_bdevs_discovered": 1,
00:41:01.113 "num_base_bdevs_operational": 4,
00:41:01.113 "base_bdevs_list": [
00:41:01.113 {
00:41:01.113 "name": "pt1",
00:41:01.113 "uuid": "00000000-0000-0000-0000-000000000001",
00:41:01.113 "is_configured": true,
00:41:01.113 "data_offset": 2048,
00:41:01.113 "data_size": 63488
00:41:01.113 },
00:41:01.113 {
00:41:01.113 "name": null,
00:41:01.113 "uuid": "00000000-0000-0000-0000-000000000002",
00:41:01.113 "is_configured": false,
00:41:01.113 "data_offset": 2048,
00:41:01.113 "data_size": 63488
00:41:01.113 },
00:41:01.113 {
00:41:01.113 "name": null,
00:41:01.113 "uuid": "00000000-0000-0000-0000-000000000003",
00:41:01.113 "is_configured": false,
00:41:01.113 "data_offset": 2048,
00:41:01.113 "data_size": 63488
00:41:01.113 },
00:41:01.113 {
00:41:01.113 "name": null,
00:41:01.113 "uuid": "00000000-0000-0000-0000-000000000004",
00:41:01.113 "is_configured": false,
00:41:01.113 "data_offset": 2048,
00:41:01.113 "data_size": 63488
00:41:01.113 }
00:41:01.113 ]
00:41:01.113 }'
00:41:01.113 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:41:01.113 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:41:01.372 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:41:01.372 23:22:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:41:01.372 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:01.372 23:22:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:41:01.372 [2024-12-09 23:22:42.006148] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:41:01.372 [2024-12-09 23:22:42.006464] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:41:01.372 [2024-12-09 23:22:42.006617] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:41:01.372 [2024-12-09 23:22:42.006715] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:41:01.372 [2024-12-09 23:22:42.007268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:41:01.631 [2024-12-09 23:22:42.007437] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:41:01.631 [2024-12-09 23:22:42.007544] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:41:01.631 [2024-12-09 23:22:42.007586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:41:01.631 pt2
00:41:01.631 23:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:01.631 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:41:01.631 23:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:01.631 23:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:41:01.631 [2024-12-09 23:22:42.018115] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:41:01.631 23:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:01.631 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
00:41:01.631 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:41:01.631 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:41:01.631 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:41:01.631 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:41:01.631 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:41:01.631 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:41:01.631 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:41:01.631 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:41:01.631 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:41:01.631 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:41:01.631 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:41:01.631 23:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:01.631 23:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:41:01.631 23:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:01.631 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:41:01.631 "name": "raid_bdev1",
00:41:01.631 "uuid": "ab6a9081-c785-4331-828b-d913d9ccb179",
00:41:01.631 "strip_size_kb": 64,
00:41:01.631 "state": "configuring",
00:41:01.631 "raid_level": "raid5f",
00:41:01.631 "superblock": true,
00:41:01.631 "num_base_bdevs": 4,
00:41:01.631 "num_base_bdevs_discovered": 1,
00:41:01.631 "num_base_bdevs_operational": 4,
00:41:01.631 "base_bdevs_list": [
00:41:01.631 {
00:41:01.631 "name": "pt1",
00:41:01.631 "uuid": "00000000-0000-0000-0000-000000000001",
00:41:01.631 "is_configured": true,
00:41:01.631 "data_offset": 2048,
00:41:01.631 "data_size": 63488
00:41:01.631 },
00:41:01.631 {
00:41:01.631 "name": null,
00:41:01.631 "uuid": "00000000-0000-0000-0000-000000000002",
00:41:01.631 "is_configured": false,
00:41:01.631 "data_offset": 0,
00:41:01.631 "data_size": 63488
00:41:01.631 },
00:41:01.631 {
00:41:01.631 "name": null,
00:41:01.631 "uuid": "00000000-0000-0000-0000-000000000003",
00:41:01.631 "is_configured": false,
00:41:01.631 "data_offset": 2048,
00:41:01.631 "data_size": 63488
00:41:01.631 },
00:41:01.631 {
00:41:01.631 "name": null,
00:41:01.631 "uuid": "00000000-0000-0000-0000-000000000004",
00:41:01.631 "is_configured": false,
00:41:01.631 "data_offset": 2048,
00:41:01.631 "data_size": 63488
00:41:01.631 }
00:41:01.631 ]
00:41:01.631 }'
00:41:01.631 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:41:01.631 23:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:41:01.891 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:41:01.891 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:41:01.891 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:41:01.891 23:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:01.891 23:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:41:01.891 [2024-12-09 23:22:42.477578] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:41:01.891 [2024-12-09 23:22:42.477864] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:41:01.891 [2024-12-09 23:22:42.477898] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:41:01.891 [2024-12-09 23:22:42.477911] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:41:01.891 [2024-12-09 23:22:42.478381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:41:01.891 [2024-12-09 23:22:42.478417] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:41:01.891 [2024-12-09 23:22:42.478519] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:41:01.891 [2024-12-09 23:22:42.478544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:41:01.891 pt2
00:41:01.891 23:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:01.891 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:41:01.891 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:41:01.891 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:41:01.891 23:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:01.891 23:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:41:01.891 [2024-12-09 23:22:42.485542] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:41:01.891 [2024-12-09 23:22:42.485598] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:41:01.891 [2024-12-09 23:22:42.485620] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:41:01.891 [2024-12-09 23:22:42.485631] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:41:01.891 [2024-12-09 23:22:42.486011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:41:01.891 [2024-12-09 23:22:42.486029] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:41:01.891 [2024-12-09 23:22:42.486095] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:41:01.891 [2024-12-09 23:22:42.486120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:41:01.891 pt3
00:41:01.891 23:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:01.891 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:41:01.891 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:41:01.891 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:41:01.891 23:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:01.891 23:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:41:01.891 [2024-12-09 23:22:42.493502] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:41:01.891 [2024-12-09 23:22:42.493556] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:41:01.891 [2024-12-09 23:22:42.493576] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:41:01.891 [2024-12-09 23:22:42.493587] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:41:01.891 [2024-12-09 23:22:42.493964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:41:01.891 [2024-12-09 23:22:42.493982] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:41:01.891 [2024-12-09 23:22:42.494046] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:41:01.891 [2024-12-09 23:22:42.494068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:41:01.891 [2024-12-09 23:22:42.494199] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:41:01.891 [2024-12-09 23:22:42.494209] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:41:01.891 [2024-12-09 23:22:42.494470] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:41:01.891 [2024-12-09 23:22:42.502032] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:41:01.891 [2024-12-09 23:22:42.502064] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:41:01.891 [2024-12-09 23:22:42.502238] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:41:01.891 pt4
00:41:01.891 23:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:01.891 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:41:01.891 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:41:01.891 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:41:01.891 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:41:01.891 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:41:01.891 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:41:01.891 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:41:01.891 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:41:01.891 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:41:01.891 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:41:01.891 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:41:01.891 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:41:01.891 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:41:01.891 23:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:01.891 23:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:41:01.891 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:41:02.150 23:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:02.150 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:41:02.150 "name": "raid_bdev1",
00:41:02.150 "uuid": "ab6a9081-c785-4331-828b-d913d9ccb179",
00:41:02.150 "strip_size_kb": 64,
00:41:02.150 "state": "online",
00:41:02.150 "raid_level": "raid5f",
00:41:02.150 "superblock": true,
00:41:02.150 "num_base_bdevs": 4,
00:41:02.150 "num_base_bdevs_discovered": 4,
00:41:02.150 "num_base_bdevs_operational": 4,
00:41:02.150 "base_bdevs_list": [
00:41:02.150 {
00:41:02.150 "name": "pt1",
00:41:02.150 "uuid": "00000000-0000-0000-0000-000000000001",
00:41:02.150 "is_configured": true,
00:41:02.150 "data_offset": 2048,
00:41:02.150 "data_size": 63488
00:41:02.150 },
00:41:02.150 {
00:41:02.150 "name": "pt2",
00:41:02.150 "uuid": "00000000-0000-0000-0000-000000000002",
00:41:02.150 "is_configured": true,
00:41:02.150 "data_offset": 2048,
00:41:02.150 "data_size": 63488
00:41:02.150 },
00:41:02.150 {
00:41:02.150 "name": "pt3",
00:41:02.150 "uuid": "00000000-0000-0000-0000-000000000003",
00:41:02.150 "is_configured": true,
00:41:02.150 "data_offset": 2048,
00:41:02.150 "data_size": 63488
00:41:02.150 },
00:41:02.150 {
00:41:02.150 "name": "pt4",
00:41:02.150 "uuid": "00000000-0000-0000-0000-000000000004",
00:41:02.150 "is_configured": true,
00:41:02.150 "data_offset": 2048,
00:41:02.150 "data_size": 63488
00:41:02.150 }
00:41:02.150 ]
00:41:02.150 }'
00:41:02.150 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:41:02.150 23:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:41:02.408 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:41:02.408 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:41:02.408 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:41:02.408 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:41:02.408 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:41:02.408 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:41:02.408 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:41:02.408 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:41:02.408 23:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:02.408 23:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:41:02.408 [2024-12-09 23:22:42.954780] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:41:02.408 23:22:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:02.408 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:41:02.408 "name": "raid_bdev1",
00:41:02.408 "aliases": [
00:41:02.408 "ab6a9081-c785-4331-828b-d913d9ccb179"
00:41:02.408 ],
00:41:02.408 "product_name": "Raid Volume",
00:41:02.408 "block_size": 512,
00:41:02.408 "num_blocks": 190464,
00:41:02.408 "uuid": "ab6a9081-c785-4331-828b-d913d9ccb179",
00:41:02.408 "assigned_rate_limits": {
00:41:02.408 "rw_ios_per_sec": 0,
00:41:02.408 "rw_mbytes_per_sec": 0,
00:41:02.408 "r_mbytes_per_sec": 0,
00:41:02.408 "w_mbytes_per_sec": 0
00:41:02.408 },
00:41:02.408 "claimed": false,
00:41:02.408 "zoned": false,
00:41:02.408 "supported_io_types": {
00:41:02.408 "read": true,
00:41:02.408 "write": true,
00:41:02.408 "unmap": false,
00:41:02.408 "flush": false,
00:41:02.408 "reset": true,
00:41:02.408 "nvme_admin": false,
00:41:02.408 "nvme_io": false,
00:41:02.408 "nvme_io_md": false,
00:41:02.408 "write_zeroes": true,
00:41:02.408 "zcopy": false,
00:41:02.408 "get_zone_info": false,
00:41:02.408 "zone_management": false,
00:41:02.408 "zone_append": false,
00:41:02.408 "compare": false,
00:41:02.408 "compare_and_write": false,
00:41:02.408 "abort": false,
00:41:02.408 "seek_hole": false,
00:41:02.408 "seek_data": false,
00:41:02.408 "copy": false,
00:41:02.408 "nvme_iov_md": false
00:41:02.408 },
00:41:02.408 "driver_specific": {
00:41:02.408 "raid": {
00:41:02.409 "uuid": "ab6a9081-c785-4331-828b-d913d9ccb179",
00:41:02.409 "strip_size_kb": 64,
00:41:02.409 "state": "online",
00:41:02.409 "raid_level": "raid5f",
00:41:02.409 "superblock": true,
00:41:02.409 "num_base_bdevs": 4,
00:41:02.409 "num_base_bdevs_discovered": 4,
00:41:02.409 "num_base_bdevs_operational": 4,
00:41:02.409 "base_bdevs_list": [
00:41:02.409 {
00:41:02.409 "name": "pt1",
00:41:02.409 "uuid": "00000000-0000-0000-0000-000000000001",
00:41:02.409 "is_configured": true,
00:41:02.409 "data_offset": 2048,
00:41:02.409 "data_size": 63488
00:41:02.409 },
00:41:02.409 {
00:41:02.409 "name": "pt2",
00:41:02.409 "uuid": "00000000-0000-0000-0000-000000000002",
00:41:02.409 "is_configured": true,
00:41:02.409 "data_offset": 2048,
00:41:02.409 "data_size": 63488
00:41:02.409 },
00:41:02.409 {
00:41:02.409 "name": "pt3",
00:41:02.409 "uuid": "00000000-0000-0000-0000-000000000003",
00:41:02.409 "is_configured": true,
00:41:02.409 "data_offset": 2048,
00:41:02.409 "data_size": 63488
00:41:02.409 },
00:41:02.409 {
00:41:02.409 "name": "pt4",
00:41:02.409 "uuid": "00000000-0000-0000-0000-000000000004",
00:41:02.409 "is_configured": true,
00:41:02.409 "data_offset": 2048,
00:41:02.409 "data_size": 63488
00:41:02.409 }
00:41:02.409 ]
00:41:02.409 }
00:41:02.409 }
00:41:02.409 }'
00:41:02.409 23:22:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:41:02.409 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:41:02.409 pt2
00:41:02.409 pt3
00:41:02.409 pt4'
00:41:02.409 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:41:02.668 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:41:02.668 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:41:02.668 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:41:02.668 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:41:02.668 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:02.668 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:41:02.668 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:02.668 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:41:02.668 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:41:02.668 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:41:02.668 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:41:02.668 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:41:02.668 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:02.668 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:41:02.668 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:02.668 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:41:02.668 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:41:02.668 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:41:02.668 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:41:02.668 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:02.668 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:41:02.668 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:41:02.668 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:02.668 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:41:02.668 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:41:02.668 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:41:02.668 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:41:02.668 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:02.668 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:41:02.668 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:41:02.668 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:02.668 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:41:02.668 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:41:02.668 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:41:02.668 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:41:02.668 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:02.668 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:41:02.668 [2024-12-09 23:22:43.290748] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:41:02.927 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:02.927 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ab6a9081-c785-4331-828b-d913d9ccb179 '!=' ab6a9081-c785-4331-828b-d913d9ccb179 ']'
00:41:02.927 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f
00:41:02.927 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:41:02.927 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:41:02.927 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:41:02.927 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:02.927 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:41:02.927 [2024-12-09 23:22:43.334634] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:41:02.927 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:02.927 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:41:02.928 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:41:02.928 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:41:02.928 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:41:02.928 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:41:02.928 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:41:02.928 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:41:02.928 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:41:02.928 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:41:02.928 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:41:02.928 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:41:02.928 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:41:02.928 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:02.928 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:41:02.928 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:02.928 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:41:02.928 "name": "raid_bdev1",
00:41:02.928 "uuid": "ab6a9081-c785-4331-828b-d913d9ccb179",
00:41:02.928 "strip_size_kb": 64,
00:41:02.928 "state": "online",
00:41:02.928 "raid_level": "raid5f",
00:41:02.928 "superblock": true,
00:41:02.928 "num_base_bdevs": 4,
00:41:02.928 "num_base_bdevs_discovered": 3,
00:41:02.928 "num_base_bdevs_operational": 3,
00:41:02.928 "base_bdevs_list": [
00:41:02.928 {
00:41:02.928 "name": null,
00:41:02.928 "uuid": "00000000-0000-0000-0000-000000000000",
00:41:02.928 "is_configured": false,
00:41:02.928 "data_offset": 0,
00:41:02.928 "data_size": 63488
00:41:02.928 },
00:41:02.928 {
00:41:02.928 "name": "pt2",
00:41:02.928 "uuid": "00000000-0000-0000-0000-000000000002",
00:41:02.928 "is_configured": true,
00:41:02.928 "data_offset": 2048,
00:41:02.928 "data_size": 63488
00:41:02.928 },
00:41:02.928 {
00:41:02.928 "name": "pt3",
00:41:02.928 "uuid": "00000000-0000-0000-0000-000000000003",
00:41:02.928 "is_configured": true,
00:41:02.928 "data_offset": 2048,
00:41:02.928 "data_size": 63488
00:41:02.928 },
00:41:02.928 {
00:41:02.928 "name": "pt4",
00:41:02.928 "uuid": "00000000-0000-0000-0000-000000000004",
00:41:02.928 "is_configured": true,
"data_offset": 2048, 00:41:02.928 "data_size": 63488 00:41:02.928 } 00:41:02.928 ] 00:41:02.928 }' 00:41:02.928 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:02.928 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:03.187 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:41:03.187 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:03.187 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:03.187 [2024-12-09 23:22:43.805996] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:41:03.187 [2024-12-09 23:22:43.806033] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:41:03.187 [2024-12-09 23:22:43.806116] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:03.187 [2024-12-09 23:22:43.806194] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:41:03.187 [2024-12-09 23:22:43.806207] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:41:03.187 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:03.187 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:41:03.187 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:03.187 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:03.187 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:03.445 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:03.445 23:22:43 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:41:03.445 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:41:03.445 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:41:03.445 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:41:03.445 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:41:03.445 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:03.445 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:03.446 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:03.446 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:41:03.446 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:41:03.446 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:41:03.446 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:03.446 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:03.446 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:03.446 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:41:03.446 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:41:03.446 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:41:03.446 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:03.446 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:03.446 23:22:43 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:03.446 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:41:03.446 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:41:03.446 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:41:03.446 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:41:03.446 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:41:03.446 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:03.446 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:03.446 [2024-12-09 23:22:43.885862] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:41:03.446 [2024-12-09 23:22:43.885925] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:03.446 [2024-12-09 23:22:43.885948] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:41:03.446 [2024-12-09 23:22:43.885959] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:03.446 [2024-12-09 23:22:43.888540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:03.446 [2024-12-09 23:22:43.888579] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:41:03.446 [2024-12-09 23:22:43.888666] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:41:03.446 [2024-12-09 23:22:43.888712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:41:03.446 pt2 00:41:03.446 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:03.446 23:22:43 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:41:03.446 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:03.446 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:41:03.446 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:03.446 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:03.446 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:41:03.446 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:03.446 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:03.446 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:03.446 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:03.446 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:03.446 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:03.446 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:03.446 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:03.446 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:03.446 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:03.446 "name": "raid_bdev1", 00:41:03.446 "uuid": "ab6a9081-c785-4331-828b-d913d9ccb179", 00:41:03.446 "strip_size_kb": 64, 00:41:03.446 "state": "configuring", 00:41:03.446 "raid_level": "raid5f", 00:41:03.446 "superblock": true, 00:41:03.446 
"num_base_bdevs": 4, 00:41:03.446 "num_base_bdevs_discovered": 1, 00:41:03.446 "num_base_bdevs_operational": 3, 00:41:03.446 "base_bdevs_list": [ 00:41:03.446 { 00:41:03.446 "name": null, 00:41:03.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:03.446 "is_configured": false, 00:41:03.446 "data_offset": 2048, 00:41:03.446 "data_size": 63488 00:41:03.446 }, 00:41:03.446 { 00:41:03.446 "name": "pt2", 00:41:03.446 "uuid": "00000000-0000-0000-0000-000000000002", 00:41:03.446 "is_configured": true, 00:41:03.446 "data_offset": 2048, 00:41:03.446 "data_size": 63488 00:41:03.446 }, 00:41:03.446 { 00:41:03.446 "name": null, 00:41:03.446 "uuid": "00000000-0000-0000-0000-000000000003", 00:41:03.446 "is_configured": false, 00:41:03.446 "data_offset": 2048, 00:41:03.446 "data_size": 63488 00:41:03.446 }, 00:41:03.446 { 00:41:03.446 "name": null, 00:41:03.446 "uuid": "00000000-0000-0000-0000-000000000004", 00:41:03.446 "is_configured": false, 00:41:03.446 "data_offset": 2048, 00:41:03.446 "data_size": 63488 00:41:03.446 } 00:41:03.446 ] 00:41:03.446 }' 00:41:03.446 23:22:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:03.446 23:22:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:03.705 23:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:41:03.705 23:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:41:03.705 23:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:41:03.705 23:22:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:03.705 23:22:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:03.705 [2024-12-09 23:22:44.265558] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:41:03.705 [2024-12-09 
23:22:44.265774] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:03.705 [2024-12-09 23:22:44.265810] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:41:03.705 [2024-12-09 23:22:44.265822] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:03.705 [2024-12-09 23:22:44.266274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:03.705 [2024-12-09 23:22:44.266293] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:41:03.705 [2024-12-09 23:22:44.266385] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:41:03.705 [2024-12-09 23:22:44.266438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:41:03.705 pt3 00:41:03.705 23:22:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:03.705 23:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:41:03.705 23:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:03.705 23:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:41:03.705 23:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:03.705 23:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:03.705 23:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:41:03.705 23:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:03.705 23:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:03.705 23:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:41:03.705 23:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:03.705 23:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:03.705 23:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:03.705 23:22:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:03.705 23:22:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:03.705 23:22:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:03.705 23:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:03.705 "name": "raid_bdev1", 00:41:03.705 "uuid": "ab6a9081-c785-4331-828b-d913d9ccb179", 00:41:03.705 "strip_size_kb": 64, 00:41:03.705 "state": "configuring", 00:41:03.705 "raid_level": "raid5f", 00:41:03.705 "superblock": true, 00:41:03.705 "num_base_bdevs": 4, 00:41:03.705 "num_base_bdevs_discovered": 2, 00:41:03.705 "num_base_bdevs_operational": 3, 00:41:03.705 "base_bdevs_list": [ 00:41:03.705 { 00:41:03.705 "name": null, 00:41:03.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:03.705 "is_configured": false, 00:41:03.705 "data_offset": 2048, 00:41:03.705 "data_size": 63488 00:41:03.705 }, 00:41:03.705 { 00:41:03.705 "name": "pt2", 00:41:03.705 "uuid": "00000000-0000-0000-0000-000000000002", 00:41:03.705 "is_configured": true, 00:41:03.705 "data_offset": 2048, 00:41:03.705 "data_size": 63488 00:41:03.705 }, 00:41:03.705 { 00:41:03.705 "name": "pt3", 00:41:03.705 "uuid": "00000000-0000-0000-0000-000000000003", 00:41:03.705 "is_configured": true, 00:41:03.705 "data_offset": 2048, 00:41:03.705 "data_size": 63488 00:41:03.705 }, 00:41:03.705 { 00:41:03.705 "name": null, 00:41:03.705 "uuid": "00000000-0000-0000-0000-000000000004", 00:41:03.705 "is_configured": false, 00:41:03.705 "data_offset": 2048, 
00:41:03.705 "data_size": 63488 00:41:03.705 } 00:41:03.705 ] 00:41:03.705 }' 00:41:03.705 23:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:03.705 23:22:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:04.273 23:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:41:04.273 23:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:41:04.273 23:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:41:04.273 23:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:41:04.273 23:22:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:04.273 23:22:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:04.273 [2024-12-09 23:22:44.673590] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:41:04.273 [2024-12-09 23:22:44.673662] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:04.273 [2024-12-09 23:22:44.673692] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:41:04.273 [2024-12-09 23:22:44.673704] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:04.273 [2024-12-09 23:22:44.674162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:04.273 [2024-12-09 23:22:44.674183] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:41:04.273 [2024-12-09 23:22:44.674277] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:41:04.273 [2024-12-09 23:22:44.674309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:41:04.273 [2024-12-09 23:22:44.674469] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:41:04.273 [2024-12-09 23:22:44.674481] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:41:04.273 [2024-12-09 23:22:44.674760] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:41:04.273 [2024-12-09 23:22:44.681593] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:41:04.273 pt4 00:41:04.273 [2024-12-09 23:22:44.681739] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:41:04.273 [2024-12-09 23:22:44.682072] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:04.273 23:22:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:04.273 23:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:41:04.273 23:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:04.273 23:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:04.273 23:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:04.273 23:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:04.273 23:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:41:04.273 23:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:04.273 23:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:04.273 23:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:04.273 23:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:04.273 
23:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:04.273 23:22:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:04.273 23:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:04.273 23:22:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:04.273 23:22:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:04.273 23:22:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:04.273 "name": "raid_bdev1", 00:41:04.273 "uuid": "ab6a9081-c785-4331-828b-d913d9ccb179", 00:41:04.273 "strip_size_kb": 64, 00:41:04.273 "state": "online", 00:41:04.273 "raid_level": "raid5f", 00:41:04.273 "superblock": true, 00:41:04.273 "num_base_bdevs": 4, 00:41:04.273 "num_base_bdevs_discovered": 3, 00:41:04.273 "num_base_bdevs_operational": 3, 00:41:04.273 "base_bdevs_list": [ 00:41:04.273 { 00:41:04.273 "name": null, 00:41:04.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:04.273 "is_configured": false, 00:41:04.273 "data_offset": 2048, 00:41:04.273 "data_size": 63488 00:41:04.273 }, 00:41:04.273 { 00:41:04.273 "name": "pt2", 00:41:04.273 "uuid": "00000000-0000-0000-0000-000000000002", 00:41:04.273 "is_configured": true, 00:41:04.273 "data_offset": 2048, 00:41:04.273 "data_size": 63488 00:41:04.273 }, 00:41:04.273 { 00:41:04.273 "name": "pt3", 00:41:04.273 "uuid": "00000000-0000-0000-0000-000000000003", 00:41:04.273 "is_configured": true, 00:41:04.273 "data_offset": 2048, 00:41:04.273 "data_size": 63488 00:41:04.273 }, 00:41:04.273 { 00:41:04.273 "name": "pt4", 00:41:04.273 "uuid": "00000000-0000-0000-0000-000000000004", 00:41:04.273 "is_configured": true, 00:41:04.273 "data_offset": 2048, 00:41:04.273 "data_size": 63488 00:41:04.273 } 00:41:04.273 ] 00:41:04.273 }' 00:41:04.273 23:22:44 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:04.273 23:22:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:04.533 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:41:04.533 23:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:04.533 23:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:04.533 [2024-12-09 23:22:45.122322] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:41:04.533 [2024-12-09 23:22:45.122501] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:41:04.533 [2024-12-09 23:22:45.122611] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:04.533 [2024-12-09 23:22:45.122688] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:41:04.533 [2024-12-09 23:22:45.122704] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:41:04.533 23:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:04.533 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:04.533 23:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:04.533 23:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:04.533 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:41:04.533 23:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:04.792 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:41:04.792 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:41:04.792 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:41:04.792 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:41:04.792 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:41:04.792 23:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:04.792 23:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:04.792 23:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:04.792 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:41:04.792 23:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:04.793 23:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:04.793 [2024-12-09 23:22:45.198199] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:41:04.793 [2024-12-09 23:22:45.198382] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:04.793 [2024-12-09 23:22:45.198440] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:41:04.793 [2024-12-09 23:22:45.198457] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:04.793 [2024-12-09 23:22:45.201028] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:04.793 [2024-12-09 23:22:45.201074] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:41:04.793 [2024-12-09 23:22:45.201166] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:41:04.793 [2024-12-09 23:22:45.201218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:41:04.793 
[2024-12-09 23:22:45.201358] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:41:04.793 [2024-12-09 23:22:45.201376] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:41:04.793 [2024-12-09 23:22:45.201407] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:41:04.793 [2024-12-09 23:22:45.201490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:41:04.793 [2024-12-09 23:22:45.201589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:41:04.793 pt1 00:41:04.793 23:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:04.793 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:41:04.793 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:41:04.793 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:04.793 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:41:04.793 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:04.793 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:04.793 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:41:04.793 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:04.793 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:04.793 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:04.793 23:22:45 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:41:04.793 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:04.793 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:04.793 23:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:04.793 23:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:04.793 23:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:04.793 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:04.793 "name": "raid_bdev1", 00:41:04.793 "uuid": "ab6a9081-c785-4331-828b-d913d9ccb179", 00:41:04.793 "strip_size_kb": 64, 00:41:04.793 "state": "configuring", 00:41:04.793 "raid_level": "raid5f", 00:41:04.793 "superblock": true, 00:41:04.793 "num_base_bdevs": 4, 00:41:04.793 "num_base_bdevs_discovered": 2, 00:41:04.793 "num_base_bdevs_operational": 3, 00:41:04.793 "base_bdevs_list": [ 00:41:04.793 { 00:41:04.793 "name": null, 00:41:04.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:04.793 "is_configured": false, 00:41:04.793 "data_offset": 2048, 00:41:04.793 "data_size": 63488 00:41:04.793 }, 00:41:04.793 { 00:41:04.793 "name": "pt2", 00:41:04.793 "uuid": "00000000-0000-0000-0000-000000000002", 00:41:04.793 "is_configured": true, 00:41:04.793 "data_offset": 2048, 00:41:04.793 "data_size": 63488 00:41:04.793 }, 00:41:04.793 { 00:41:04.793 "name": "pt3", 00:41:04.793 "uuid": "00000000-0000-0000-0000-000000000003", 00:41:04.793 "is_configured": true, 00:41:04.793 "data_offset": 2048, 00:41:04.793 "data_size": 63488 00:41:04.793 }, 00:41:04.793 { 00:41:04.793 "name": null, 00:41:04.793 "uuid": "00000000-0000-0000-0000-000000000004", 00:41:04.793 "is_configured": false, 00:41:04.793 "data_offset": 2048, 00:41:04.793 "data_size": 63488 00:41:04.793 } 00:41:04.793 ] 
00:41:04.793 }' 00:41:04.793 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:04.793 23:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:05.051 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:41:05.051 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:41:05.051 23:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:05.051 23:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:05.051 23:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:05.051 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:41:05.051 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:41:05.051 23:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:05.051 23:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:05.051 [2024-12-09 23:22:45.641614] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:41:05.051 [2024-12-09 23:22:45.641686] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:05.051 [2024-12-09 23:22:45.641715] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:41:05.051 [2024-12-09 23:22:45.641727] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:05.051 [2024-12-09 23:22:45.642188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:05.051 [2024-12-09 23:22:45.642215] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:41:05.051 [2024-12-09 23:22:45.642308] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:41:05.051 [2024-12-09 23:22:45.642334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:41:05.051 [2024-12-09 23:22:45.642506] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:41:05.051 [2024-12-09 23:22:45.642564] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:41:05.051 [2024-12-09 23:22:45.642849] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:41:05.051 [2024-12-09 23:22:45.650295] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:41:05.051 pt4 00:41:05.051 [2024-12-09 23:22:45.650482] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:41:05.051 [2024-12-09 23:22:45.650853] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:05.051 23:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:05.051 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:41:05.051 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:05.051 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:05.051 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:05.051 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:05.051 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:41:05.051 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:05.051 23:22:45 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:05.051 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:05.051 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:05.051 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:05.051 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:05.051 23:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:05.051 23:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:05.308 23:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:05.308 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:05.308 "name": "raid_bdev1", 00:41:05.308 "uuid": "ab6a9081-c785-4331-828b-d913d9ccb179", 00:41:05.308 "strip_size_kb": 64, 00:41:05.308 "state": "online", 00:41:05.308 "raid_level": "raid5f", 00:41:05.308 "superblock": true, 00:41:05.308 "num_base_bdevs": 4, 00:41:05.308 "num_base_bdevs_discovered": 3, 00:41:05.308 "num_base_bdevs_operational": 3, 00:41:05.309 "base_bdevs_list": [ 00:41:05.309 { 00:41:05.309 "name": null, 00:41:05.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:05.309 "is_configured": false, 00:41:05.309 "data_offset": 2048, 00:41:05.309 "data_size": 63488 00:41:05.309 }, 00:41:05.309 { 00:41:05.309 "name": "pt2", 00:41:05.309 "uuid": "00000000-0000-0000-0000-000000000002", 00:41:05.309 "is_configured": true, 00:41:05.309 "data_offset": 2048, 00:41:05.309 "data_size": 63488 00:41:05.309 }, 00:41:05.309 { 00:41:05.309 "name": "pt3", 00:41:05.309 "uuid": "00000000-0000-0000-0000-000000000003", 00:41:05.309 "is_configured": true, 00:41:05.309 "data_offset": 2048, 00:41:05.309 "data_size": 63488 
00:41:05.309 }, 00:41:05.309 { 00:41:05.309 "name": "pt4", 00:41:05.309 "uuid": "00000000-0000-0000-0000-000000000004", 00:41:05.309 "is_configured": true, 00:41:05.309 "data_offset": 2048, 00:41:05.309 "data_size": 63488 00:41:05.309 } 00:41:05.309 ] 00:41:05.309 }' 00:41:05.309 23:22:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:05.309 23:22:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:05.567 23:22:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:41:05.567 23:22:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:41:05.567 23:22:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:05.567 23:22:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:05.567 23:22:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:05.567 23:22:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:41:05.567 23:22:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:41:05.567 23:22:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:41:05.567 23:22:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:05.567 23:22:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:05.567 [2024-12-09 23:22:46.135537] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:41:05.567 23:22:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:05.567 23:22:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' ab6a9081-c785-4331-828b-d913d9ccb179 '!=' ab6a9081-c785-4331-828b-d913d9ccb179 ']' 00:41:05.567 23:22:46 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83995 00:41:05.567 23:22:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 83995 ']' 00:41:05.567 23:22:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 83995 00:41:05.567 23:22:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:41:05.567 23:22:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:05.567 23:22:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83995 00:41:05.824 killing process with pid 83995 00:41:05.824 23:22:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:05.824 23:22:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:05.824 23:22:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83995' 00:41:05.824 23:22:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 83995 00:41:05.824 23:22:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 83995 00:41:05.824 [2024-12-09 23:22:46.204614] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:41:05.824 [2024-12-09 23:22:46.204724] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:05.824 [2024-12-09 23:22:46.204818] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:41:05.824 [2024-12-09 23:22:46.204848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:41:06.081 [2024-12-09 23:22:46.600781] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:41:07.457 23:22:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:41:07.457 
************************************ 00:41:07.457 END TEST raid5f_superblock_test 00:41:07.457 ************************************ 00:41:07.457 00:41:07.457 real 0m8.478s 00:41:07.457 user 0m13.286s 00:41:07.457 sys 0m1.792s 00:41:07.457 23:22:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:07.457 23:22:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:07.457 23:22:47 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:41:07.457 23:22:47 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:41:07.457 23:22:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:41:07.457 23:22:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:07.457 23:22:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:41:07.457 ************************************ 00:41:07.457 START TEST raid5f_rebuild_test 00:41:07.457 ************************************ 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:41:07.457 23:22:47 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84479 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84479 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84479 ']' 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:07.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:07.457 23:22:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:07.457 [2024-12-09 23:22:47.942377] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:41:07.457 [2024-12-09 23:22:47.942647] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:41:07.457 Zero copy mechanism will not be used. 
00:41:07.457 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84479 ] 00:41:07.717 [2024-12-09 23:22:48.121105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:07.717 [2024-12-09 23:22:48.243523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:07.976 [2024-12-09 23:22:48.465741] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:41:07.976 [2024-12-09 23:22:48.465809] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:41:08.234 23:22:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:08.234 23:22:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:41:08.234 23:22:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:41:08.234 23:22:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:41:08.234 23:22:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.234 23:22:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:08.234 BaseBdev1_malloc 00:41:08.234 23:22:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.234 23:22:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:41:08.234 23:22:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.234 23:22:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:08.234 [2024-12-09 23:22:48.822323] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:41:08.234 [2024-12-09 23:22:48.822675] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:41:08.234 [2024-12-09 23:22:48.822711] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:41:08.234 [2024-12-09 23:22:48.822726] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:08.234 [2024-12-09 23:22:48.825149] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:08.234 [2024-12-09 23:22:48.825193] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:41:08.234 BaseBdev1 00:41:08.234 23:22:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.234 23:22:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:41:08.234 23:22:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:41:08.234 23:22:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.234 23:22:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:08.493 BaseBdev2_malloc 00:41:08.493 23:22:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.493 23:22:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:41:08.493 23:22:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.493 23:22:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:08.493 [2024-12-09 23:22:48.882126] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:41:08.493 [2024-12-09 23:22:48.882198] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:08.493 [2024-12-09 23:22:48.882219] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:41:08.493 [2024-12-09 23:22:48.882235] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:08.493 [2024-12-09 23:22:48.884609] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:08.493 [2024-12-09 23:22:48.884778] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:41:08.493 BaseBdev2 00:41:08.493 23:22:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.493 23:22:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:41:08.493 23:22:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:41:08.493 23:22:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.493 23:22:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:08.493 BaseBdev3_malloc 00:41:08.493 23:22:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.493 23:22:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:41:08.493 23:22:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.493 23:22:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:08.493 [2024-12-09 23:22:48.953949] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:41:08.493 [2024-12-09 23:22:48.954013] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:08.493 [2024-12-09 23:22:48.954038] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:41:08.493 [2024-12-09 23:22:48.954052] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:08.493 [2024-12-09 23:22:48.956432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:08.493 [2024-12-09 
23:22:48.956591] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:41:08.493 BaseBdev3 00:41:08.493 23:22:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.493 23:22:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:41:08.493 23:22:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:41:08.493 23:22:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.493 23:22:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:08.493 BaseBdev4_malloc 00:41:08.493 23:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.493 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:41:08.493 23:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.493 23:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:08.493 [2024-12-09 23:22:49.014292] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:41:08.493 [2024-12-09 23:22:49.014358] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:08.493 [2024-12-09 23:22:49.014380] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:41:08.493 [2024-12-09 23:22:49.014403] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:08.493 [2024-12-09 23:22:49.016716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:08.493 [2024-12-09 23:22:49.016761] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:41:08.493 BaseBdev4 00:41:08.493 23:22:49 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.493 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:41:08.493 23:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.493 23:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:08.493 spare_malloc 00:41:08.493 23:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.493 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:41:08.493 23:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.493 23:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:08.493 spare_delay 00:41:08.493 23:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.493 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:41:08.493 23:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.493 23:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:08.493 [2024-12-09 23:22:49.086105] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:41:08.493 [2024-12-09 23:22:49.086166] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:08.493 [2024-12-09 23:22:49.086187] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:41:08.493 [2024-12-09 23:22:49.086200] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:08.493 [2024-12-09 23:22:49.088559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:08.493 [2024-12-09 23:22:49.088600] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:41:08.493 spare 00:41:08.493 23:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.493 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:41:08.493 23:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.493 23:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:08.493 [2024-12-09 23:22:49.098137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:41:08.493 [2024-12-09 23:22:49.100213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:41:08.493 [2024-12-09 23:22:49.100406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:41:08.493 [2024-12-09 23:22:49.100473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:41:08.493 [2024-12-09 23:22:49.100564] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:41:08.493 [2024-12-09 23:22:49.100578] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:41:08.493 [2024-12-09 23:22:49.100860] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:41:08.493 [2024-12-09 23:22:49.109200] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:41:08.493 [2024-12-09 23:22:49.109315] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:41:08.493 [2024-12-09 23:22:49.109653] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:08.493 23:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.493 23:22:49 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:41:08.493 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:08.493 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:08.493 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:08.493 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:08.493 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:41:08.493 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:08.493 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:08.493 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:08.493 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:08.493 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:08.493 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:08.493 23:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.493 23:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:08.752 23:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.752 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:08.752 "name": "raid_bdev1", 00:41:08.752 "uuid": "3d703a4e-1a92-4088-b17b-acdd7cf6cad4", 00:41:08.752 "strip_size_kb": 64, 00:41:08.752 "state": "online", 00:41:08.752 "raid_level": "raid5f", 00:41:08.752 "superblock": false, 00:41:08.752 "num_base_bdevs": 4, 00:41:08.752 
"num_base_bdevs_discovered": 4, 00:41:08.752 "num_base_bdevs_operational": 4, 00:41:08.752 "base_bdevs_list": [ 00:41:08.752 { 00:41:08.752 "name": "BaseBdev1", 00:41:08.752 "uuid": "0d57e837-aea1-5665-9be5-fa2491b459fd", 00:41:08.752 "is_configured": true, 00:41:08.752 "data_offset": 0, 00:41:08.752 "data_size": 65536 00:41:08.752 }, 00:41:08.752 { 00:41:08.752 "name": "BaseBdev2", 00:41:08.752 "uuid": "168a8b2f-b5e3-5348-bb5b-6856acb7244f", 00:41:08.752 "is_configured": true, 00:41:08.752 "data_offset": 0, 00:41:08.752 "data_size": 65536 00:41:08.752 }, 00:41:08.752 { 00:41:08.752 "name": "BaseBdev3", 00:41:08.752 "uuid": "02a34a44-7520-5516-8372-4805758868a2", 00:41:08.752 "is_configured": true, 00:41:08.752 "data_offset": 0, 00:41:08.752 "data_size": 65536 00:41:08.752 }, 00:41:08.752 { 00:41:08.752 "name": "BaseBdev4", 00:41:08.752 "uuid": "a3215ab1-dcda-5dab-94eb-b6b3b41c3054", 00:41:08.752 "is_configured": true, 00:41:08.752 "data_offset": 0, 00:41:08.752 "data_size": 65536 00:41:08.752 } 00:41:08.752 ] 00:41:08.752 }' 00:41:08.752 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:08.752 23:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:09.011 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:41:09.011 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:41:09.011 23:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:09.011 23:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:09.011 [2024-12-09 23:22:49.549910] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:41:09.011 23:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:09.011 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 
00:41:09.011 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:09.011 23:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:09.011 23:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:09.011 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:41:09.011 23:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:09.011 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:41:09.011 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:41:09.011 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:41:09.011 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:41:09.011 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:41:09.011 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:41:09.011 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:41:09.011 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:41:09.011 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:41:09.011 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:41:09.011 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:41:09.011 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:41:09.011 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:41:09.011 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:41:09.270 [2024-12-09 23:22:49.829509] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:41:09.270 /dev/nbd0 00:41:09.270 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:41:09.270 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:41:09.270 23:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:41:09.270 23:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:41:09.270 23:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:41:09.270 23:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:41:09.270 23:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:41:09.270 23:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:41:09.270 23:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:41:09.270 23:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:41:09.270 23:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:09.270 1+0 records in 00:41:09.270 1+0 records out 00:41:09.270 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392529 s, 10.4 MB/s 00:41:09.270 23:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:09.270 23:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:41:09.270 23:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:41:09.530 23:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:41:09.530 23:22:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:41:09.530 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:09.530 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:41:09.530 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:41:09.530 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:41:09.530 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:41:09.530 23:22:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:41:09.789 512+0 records in 00:41:09.789 512+0 records out 00:41:09.789 100663296 bytes (101 MB, 96 MiB) copied, 0.494686 s, 203 MB/s 00:41:09.789 23:22:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:41:09.789 23:22:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:41:09.789 23:22:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:41:09.789 23:22:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:41:09.789 23:22:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:41:09.789 23:22:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:09.789 23:22:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:41:10.048 23:22:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:41:10.048 [2024-12-09 23:22:50.626807] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:41:10.048 23:22:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:41:10.048 23:22:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:41:10.048 23:22:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:10.048 23:22:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:10.048 23:22:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:41:10.048 23:22:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:41:10.048 23:22:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:41:10.048 23:22:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:41:10.048 23:22:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:10.048 23:22:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:10.048 [2024-12-09 23:22:50.643879] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:41:10.048 23:22:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:10.048 23:22:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:41:10.048 23:22:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:10.048 23:22:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:10.048 23:22:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:10.048 23:22:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:10.048 23:22:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:41:10.048 23:22:50 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:10.048 23:22:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:10.048 23:22:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:10.048 23:22:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:10.048 23:22:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:10.048 23:22:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:10.048 23:22:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:10.048 23:22:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:10.048 23:22:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:10.307 23:22:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:10.307 "name": "raid_bdev1", 00:41:10.307 "uuid": "3d703a4e-1a92-4088-b17b-acdd7cf6cad4", 00:41:10.307 "strip_size_kb": 64, 00:41:10.307 "state": "online", 00:41:10.307 "raid_level": "raid5f", 00:41:10.307 "superblock": false, 00:41:10.307 "num_base_bdevs": 4, 00:41:10.307 "num_base_bdevs_discovered": 3, 00:41:10.307 "num_base_bdevs_operational": 3, 00:41:10.307 "base_bdevs_list": [ 00:41:10.307 { 00:41:10.307 "name": null, 00:41:10.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:10.307 "is_configured": false, 00:41:10.307 "data_offset": 0, 00:41:10.307 "data_size": 65536 00:41:10.307 }, 00:41:10.307 { 00:41:10.307 "name": "BaseBdev2", 00:41:10.307 "uuid": "168a8b2f-b5e3-5348-bb5b-6856acb7244f", 00:41:10.307 "is_configured": true, 00:41:10.307 "data_offset": 0, 00:41:10.307 "data_size": 65536 00:41:10.307 }, 00:41:10.307 { 00:41:10.307 "name": "BaseBdev3", 00:41:10.307 "uuid": "02a34a44-7520-5516-8372-4805758868a2", 00:41:10.307 "is_configured": true, 00:41:10.307 
"data_offset": 0, 00:41:10.307 "data_size": 65536 00:41:10.307 }, 00:41:10.307 { 00:41:10.307 "name": "BaseBdev4", 00:41:10.307 "uuid": "a3215ab1-dcda-5dab-94eb-b6b3b41c3054", 00:41:10.307 "is_configured": true, 00:41:10.307 "data_offset": 0, 00:41:10.307 "data_size": 65536 00:41:10.307 } 00:41:10.307 ] 00:41:10.307 }' 00:41:10.307 23:22:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:10.307 23:22:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:10.565 23:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:41:10.565 23:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:10.565 23:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:10.565 [2024-12-09 23:22:51.039347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:10.565 [2024-12-09 23:22:51.057014] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:41:10.565 23:22:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:10.565 23:22:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:41:10.565 [2024-12-09 23:22:51.068193] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:41:11.499 23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:11.499 23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:11.499 23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:11.499 23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:11.499 23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:11.499 
23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:11.499 23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:11.499 23:22:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:11.499 23:22:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:11.499 23:22:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:11.499 23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:11.499 "name": "raid_bdev1", 00:41:11.499 "uuid": "3d703a4e-1a92-4088-b17b-acdd7cf6cad4", 00:41:11.499 "strip_size_kb": 64, 00:41:11.499 "state": "online", 00:41:11.499 "raid_level": "raid5f", 00:41:11.499 "superblock": false, 00:41:11.499 "num_base_bdevs": 4, 00:41:11.499 "num_base_bdevs_discovered": 4, 00:41:11.499 "num_base_bdevs_operational": 4, 00:41:11.499 "process": { 00:41:11.499 "type": "rebuild", 00:41:11.499 "target": "spare", 00:41:11.499 "progress": { 00:41:11.499 "blocks": 19200, 00:41:11.499 "percent": 9 00:41:11.499 } 00:41:11.499 }, 00:41:11.499 "base_bdevs_list": [ 00:41:11.499 { 00:41:11.499 "name": "spare", 00:41:11.499 "uuid": "1c09327d-f4bf-5249-85e2-39672c4bc4ba", 00:41:11.499 "is_configured": true, 00:41:11.499 "data_offset": 0, 00:41:11.499 "data_size": 65536 00:41:11.499 }, 00:41:11.499 { 00:41:11.499 "name": "BaseBdev2", 00:41:11.499 "uuid": "168a8b2f-b5e3-5348-bb5b-6856acb7244f", 00:41:11.499 "is_configured": true, 00:41:11.499 "data_offset": 0, 00:41:11.499 "data_size": 65536 00:41:11.499 }, 00:41:11.499 { 00:41:11.499 "name": "BaseBdev3", 00:41:11.499 "uuid": "02a34a44-7520-5516-8372-4805758868a2", 00:41:11.499 "is_configured": true, 00:41:11.499 "data_offset": 0, 00:41:11.499 "data_size": 65536 00:41:11.499 }, 00:41:11.499 { 00:41:11.499 "name": "BaseBdev4", 00:41:11.499 "uuid": 
"a3215ab1-dcda-5dab-94eb-b6b3b41c3054", 00:41:11.499 "is_configured": true, 00:41:11.499 "data_offset": 0, 00:41:11.499 "data_size": 65536 00:41:11.499 } 00:41:11.499 ] 00:41:11.499 }' 00:41:11.499 23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:11.758 23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:11.758 23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:11.758 23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:11.758 23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:41:11.758 23:22:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:11.758 23:22:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:11.758 [2024-12-09 23:22:52.215865] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:11.758 [2024-12-09 23:22:52.277086] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:41:11.758 [2024-12-09 23:22:52.277179] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:11.758 [2024-12-09 23:22:52.277199] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:11.758 [2024-12-09 23:22:52.277212] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:41:11.758 23:22:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:11.758 23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:41:11.758 23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:11.758 23:22:52 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:11.758 23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:11.758 23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:11.758 23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:41:11.758 23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:11.758 23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:11.758 23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:11.758 23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:11.758 23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:11.758 23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:11.758 23:22:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:11.758 23:22:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:11.758 23:22:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:11.758 23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:11.758 "name": "raid_bdev1", 00:41:11.758 "uuid": "3d703a4e-1a92-4088-b17b-acdd7cf6cad4", 00:41:11.758 "strip_size_kb": 64, 00:41:11.758 "state": "online", 00:41:11.758 "raid_level": "raid5f", 00:41:11.758 "superblock": false, 00:41:11.758 "num_base_bdevs": 4, 00:41:11.758 "num_base_bdevs_discovered": 3, 00:41:11.758 "num_base_bdevs_operational": 3, 00:41:11.758 "base_bdevs_list": [ 00:41:11.758 { 00:41:11.758 "name": null, 00:41:11.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:11.758 "is_configured": false, 00:41:11.758 "data_offset": 0, 
00:41:11.758 "data_size": 65536 00:41:11.758 }, 00:41:11.758 { 00:41:11.758 "name": "BaseBdev2", 00:41:11.758 "uuid": "168a8b2f-b5e3-5348-bb5b-6856acb7244f", 00:41:11.758 "is_configured": true, 00:41:11.758 "data_offset": 0, 00:41:11.758 "data_size": 65536 00:41:11.758 }, 00:41:11.758 { 00:41:11.758 "name": "BaseBdev3", 00:41:11.758 "uuid": "02a34a44-7520-5516-8372-4805758868a2", 00:41:11.758 "is_configured": true, 00:41:11.758 "data_offset": 0, 00:41:11.758 "data_size": 65536 00:41:11.758 }, 00:41:11.758 { 00:41:11.758 "name": "BaseBdev4", 00:41:11.758 "uuid": "a3215ab1-dcda-5dab-94eb-b6b3b41c3054", 00:41:11.758 "is_configured": true, 00:41:11.758 "data_offset": 0, 00:41:11.758 "data_size": 65536 00:41:11.758 } 00:41:11.758 ] 00:41:11.758 }' 00:41:11.758 23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:11.758 23:22:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:12.326 23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:12.326 23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:12.326 23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:41:12.326 23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:41:12.326 23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:12.326 23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:12.326 23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:12.326 23:22:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:12.326 23:22:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:12.326 23:22:52 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:12.326 23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:12.326 "name": "raid_bdev1", 00:41:12.326 "uuid": "3d703a4e-1a92-4088-b17b-acdd7cf6cad4", 00:41:12.326 "strip_size_kb": 64, 00:41:12.326 "state": "online", 00:41:12.327 "raid_level": "raid5f", 00:41:12.327 "superblock": false, 00:41:12.327 "num_base_bdevs": 4, 00:41:12.327 "num_base_bdevs_discovered": 3, 00:41:12.327 "num_base_bdevs_operational": 3, 00:41:12.327 "base_bdevs_list": [ 00:41:12.327 { 00:41:12.327 "name": null, 00:41:12.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:12.327 "is_configured": false, 00:41:12.327 "data_offset": 0, 00:41:12.327 "data_size": 65536 00:41:12.327 }, 00:41:12.327 { 00:41:12.327 "name": "BaseBdev2", 00:41:12.327 "uuid": "168a8b2f-b5e3-5348-bb5b-6856acb7244f", 00:41:12.327 "is_configured": true, 00:41:12.327 "data_offset": 0, 00:41:12.327 "data_size": 65536 00:41:12.327 }, 00:41:12.327 { 00:41:12.327 "name": "BaseBdev3", 00:41:12.327 "uuid": "02a34a44-7520-5516-8372-4805758868a2", 00:41:12.327 "is_configured": true, 00:41:12.327 "data_offset": 0, 00:41:12.327 "data_size": 65536 00:41:12.327 }, 00:41:12.327 { 00:41:12.327 "name": "BaseBdev4", 00:41:12.327 "uuid": "a3215ab1-dcda-5dab-94eb-b6b3b41c3054", 00:41:12.327 "is_configured": true, 00:41:12.327 "data_offset": 0, 00:41:12.327 "data_size": 65536 00:41:12.327 } 00:41:12.327 ] 00:41:12.327 }' 00:41:12.327 23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:12.327 23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:41:12.327 23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:12.327 23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:41:12.327 23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd 
bdev_raid_add_base_bdev raid_bdev1 spare 00:41:12.327 23:22:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:12.327 23:22:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:12.327 [2024-12-09 23:22:52.844546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:12.327 [2024-12-09 23:22:52.859936] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:41:12.327 23:22:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:12.327 23:22:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:41:12.327 [2024-12-09 23:22:52.870229] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:41:13.320 23:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:13.320 23:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:13.320 23:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:13.320 23:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:13.320 23:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:13.320 23:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:13.320 23:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:13.320 23:22:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:13.320 23:22:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:13.320 23:22:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:13.320 23:22:53 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:13.320 "name": "raid_bdev1", 00:41:13.320 "uuid": "3d703a4e-1a92-4088-b17b-acdd7cf6cad4", 00:41:13.320 "strip_size_kb": 64, 00:41:13.320 "state": "online", 00:41:13.320 "raid_level": "raid5f", 00:41:13.320 "superblock": false, 00:41:13.320 "num_base_bdevs": 4, 00:41:13.320 "num_base_bdevs_discovered": 4, 00:41:13.320 "num_base_bdevs_operational": 4, 00:41:13.320 "process": { 00:41:13.320 "type": "rebuild", 00:41:13.320 "target": "spare", 00:41:13.320 "progress": { 00:41:13.320 "blocks": 19200, 00:41:13.320 "percent": 9 00:41:13.320 } 00:41:13.320 }, 00:41:13.320 "base_bdevs_list": [ 00:41:13.320 { 00:41:13.320 "name": "spare", 00:41:13.320 "uuid": "1c09327d-f4bf-5249-85e2-39672c4bc4ba", 00:41:13.320 "is_configured": true, 00:41:13.320 "data_offset": 0, 00:41:13.320 "data_size": 65536 00:41:13.320 }, 00:41:13.320 { 00:41:13.320 "name": "BaseBdev2", 00:41:13.320 "uuid": "168a8b2f-b5e3-5348-bb5b-6856acb7244f", 00:41:13.320 "is_configured": true, 00:41:13.320 "data_offset": 0, 00:41:13.320 "data_size": 65536 00:41:13.320 }, 00:41:13.320 { 00:41:13.320 "name": "BaseBdev3", 00:41:13.320 "uuid": "02a34a44-7520-5516-8372-4805758868a2", 00:41:13.320 "is_configured": true, 00:41:13.320 "data_offset": 0, 00:41:13.320 "data_size": 65536 00:41:13.320 }, 00:41:13.320 { 00:41:13.320 "name": "BaseBdev4", 00:41:13.320 "uuid": "a3215ab1-dcda-5dab-94eb-b6b3b41c3054", 00:41:13.320 "is_configured": true, 00:41:13.320 "data_offset": 0, 00:41:13.320 "data_size": 65536 00:41:13.320 } 00:41:13.320 ] 00:41:13.320 }' 00:41:13.320 23:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:13.320 23:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:13.320 23:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:13.579 23:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:41:13.579 23:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:41:13.579 23:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:41:13.579 23:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:41:13.579 23:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=623 00:41:13.579 23:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:41:13.579 23:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:13.579 23:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:13.579 23:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:13.579 23:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:13.579 23:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:13.579 23:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:13.579 23:22:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:13.579 23:22:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:13.579 23:22:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:13.579 23:22:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:13.579 23:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:13.579 "name": "raid_bdev1", 00:41:13.579 "uuid": "3d703a4e-1a92-4088-b17b-acdd7cf6cad4", 00:41:13.579 "strip_size_kb": 64, 00:41:13.579 "state": "online", 00:41:13.579 "raid_level": "raid5f", 00:41:13.579 "superblock": false, 
00:41:13.579 "num_base_bdevs": 4, 00:41:13.579 "num_base_bdevs_discovered": 4, 00:41:13.579 "num_base_bdevs_operational": 4, 00:41:13.579 "process": { 00:41:13.579 "type": "rebuild", 00:41:13.579 "target": "spare", 00:41:13.579 "progress": { 00:41:13.579 "blocks": 21120, 00:41:13.579 "percent": 10 00:41:13.579 } 00:41:13.579 }, 00:41:13.579 "base_bdevs_list": [ 00:41:13.579 { 00:41:13.579 "name": "spare", 00:41:13.579 "uuid": "1c09327d-f4bf-5249-85e2-39672c4bc4ba", 00:41:13.579 "is_configured": true, 00:41:13.579 "data_offset": 0, 00:41:13.579 "data_size": 65536 00:41:13.579 }, 00:41:13.579 { 00:41:13.579 "name": "BaseBdev2", 00:41:13.579 "uuid": "168a8b2f-b5e3-5348-bb5b-6856acb7244f", 00:41:13.579 "is_configured": true, 00:41:13.579 "data_offset": 0, 00:41:13.579 "data_size": 65536 00:41:13.579 }, 00:41:13.579 { 00:41:13.579 "name": "BaseBdev3", 00:41:13.579 "uuid": "02a34a44-7520-5516-8372-4805758868a2", 00:41:13.579 "is_configured": true, 00:41:13.579 "data_offset": 0, 00:41:13.579 "data_size": 65536 00:41:13.579 }, 00:41:13.579 { 00:41:13.579 "name": "BaseBdev4", 00:41:13.579 "uuid": "a3215ab1-dcda-5dab-94eb-b6b3b41c3054", 00:41:13.579 "is_configured": true, 00:41:13.579 "data_offset": 0, 00:41:13.579 "data_size": 65536 00:41:13.579 } 00:41:13.579 ] 00:41:13.579 }' 00:41:13.579 23:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:13.579 23:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:13.579 23:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:13.579 23:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:13.579 23:22:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:41:14.515 23:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:41:14.515 23:22:55 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:14.515 23:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:14.515 23:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:14.515 23:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:14.515 23:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:14.515 23:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:14.515 23:22:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.515 23:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:14.515 23:22:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:14.515 23:22:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:14.773 23:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:14.773 "name": "raid_bdev1", 00:41:14.773 "uuid": "3d703a4e-1a92-4088-b17b-acdd7cf6cad4", 00:41:14.773 "strip_size_kb": 64, 00:41:14.773 "state": "online", 00:41:14.773 "raid_level": "raid5f", 00:41:14.773 "superblock": false, 00:41:14.773 "num_base_bdevs": 4, 00:41:14.773 "num_base_bdevs_discovered": 4, 00:41:14.773 "num_base_bdevs_operational": 4, 00:41:14.773 "process": { 00:41:14.773 "type": "rebuild", 00:41:14.773 "target": "spare", 00:41:14.773 "progress": { 00:41:14.773 "blocks": 42240, 00:41:14.773 "percent": 21 00:41:14.773 } 00:41:14.773 }, 00:41:14.773 "base_bdevs_list": [ 00:41:14.773 { 00:41:14.773 "name": "spare", 00:41:14.773 "uuid": "1c09327d-f4bf-5249-85e2-39672c4bc4ba", 00:41:14.773 "is_configured": true, 00:41:14.773 "data_offset": 0, 00:41:14.773 "data_size": 65536 00:41:14.773 }, 00:41:14.773 { 00:41:14.773 
"name": "BaseBdev2", 00:41:14.773 "uuid": "168a8b2f-b5e3-5348-bb5b-6856acb7244f", 00:41:14.773 "is_configured": true, 00:41:14.773 "data_offset": 0, 00:41:14.773 "data_size": 65536 00:41:14.773 }, 00:41:14.773 { 00:41:14.773 "name": "BaseBdev3", 00:41:14.773 "uuid": "02a34a44-7520-5516-8372-4805758868a2", 00:41:14.773 "is_configured": true, 00:41:14.773 "data_offset": 0, 00:41:14.773 "data_size": 65536 00:41:14.773 }, 00:41:14.773 { 00:41:14.773 "name": "BaseBdev4", 00:41:14.773 "uuid": "a3215ab1-dcda-5dab-94eb-b6b3b41c3054", 00:41:14.773 "is_configured": true, 00:41:14.773 "data_offset": 0, 00:41:14.773 "data_size": 65536 00:41:14.773 } 00:41:14.773 ] 00:41:14.773 }' 00:41:14.773 23:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:14.773 23:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:14.773 23:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:14.773 23:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:14.773 23:22:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:41:15.707 23:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:41:15.707 23:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:15.707 23:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:15.707 23:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:15.707 23:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:15.707 23:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:15.707 23:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:41:15.707 23:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:15.707 23:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:15.707 23:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:15.707 23:22:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:15.707 23:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:15.707 "name": "raid_bdev1", 00:41:15.707 "uuid": "3d703a4e-1a92-4088-b17b-acdd7cf6cad4", 00:41:15.707 "strip_size_kb": 64, 00:41:15.707 "state": "online", 00:41:15.707 "raid_level": "raid5f", 00:41:15.707 "superblock": false, 00:41:15.707 "num_base_bdevs": 4, 00:41:15.707 "num_base_bdevs_discovered": 4, 00:41:15.707 "num_base_bdevs_operational": 4, 00:41:15.707 "process": { 00:41:15.707 "type": "rebuild", 00:41:15.707 "target": "spare", 00:41:15.707 "progress": { 00:41:15.707 "blocks": 63360, 00:41:15.707 "percent": 32 00:41:15.707 } 00:41:15.707 }, 00:41:15.707 "base_bdevs_list": [ 00:41:15.707 { 00:41:15.707 "name": "spare", 00:41:15.707 "uuid": "1c09327d-f4bf-5249-85e2-39672c4bc4ba", 00:41:15.707 "is_configured": true, 00:41:15.707 "data_offset": 0, 00:41:15.707 "data_size": 65536 00:41:15.707 }, 00:41:15.707 { 00:41:15.707 "name": "BaseBdev2", 00:41:15.707 "uuid": "168a8b2f-b5e3-5348-bb5b-6856acb7244f", 00:41:15.707 "is_configured": true, 00:41:15.707 "data_offset": 0, 00:41:15.707 "data_size": 65536 00:41:15.707 }, 00:41:15.707 { 00:41:15.707 "name": "BaseBdev3", 00:41:15.707 "uuid": "02a34a44-7520-5516-8372-4805758868a2", 00:41:15.707 "is_configured": true, 00:41:15.707 "data_offset": 0, 00:41:15.707 "data_size": 65536 00:41:15.707 }, 00:41:15.707 { 00:41:15.707 "name": "BaseBdev4", 00:41:15.707 "uuid": "a3215ab1-dcda-5dab-94eb-b6b3b41c3054", 00:41:15.707 "is_configured": true, 00:41:15.707 "data_offset": 0, 00:41:15.707 
"data_size": 65536 00:41:15.707 } 00:41:15.707 ] 00:41:15.707 }' 00:41:15.707 23:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:15.965 23:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:15.965 23:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:15.965 23:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:15.965 23:22:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:41:16.900 23:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:41:16.900 23:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:16.900 23:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:16.900 23:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:16.900 23:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:16.900 23:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:16.900 23:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:16.900 23:22:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:16.900 23:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:16.900 23:22:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:16.900 23:22:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:16.900 23:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:16.900 "name": "raid_bdev1", 00:41:16.900 "uuid": 
"3d703a4e-1a92-4088-b17b-acdd7cf6cad4", 00:41:16.900 "strip_size_kb": 64, 00:41:16.900 "state": "online", 00:41:16.900 "raid_level": "raid5f", 00:41:16.900 "superblock": false, 00:41:16.900 "num_base_bdevs": 4, 00:41:16.900 "num_base_bdevs_discovered": 4, 00:41:16.900 "num_base_bdevs_operational": 4, 00:41:16.900 "process": { 00:41:16.900 "type": "rebuild", 00:41:16.900 "target": "spare", 00:41:16.900 "progress": { 00:41:16.900 "blocks": 86400, 00:41:16.900 "percent": 43 00:41:16.900 } 00:41:16.900 }, 00:41:16.900 "base_bdevs_list": [ 00:41:16.900 { 00:41:16.900 "name": "spare", 00:41:16.900 "uuid": "1c09327d-f4bf-5249-85e2-39672c4bc4ba", 00:41:16.900 "is_configured": true, 00:41:16.900 "data_offset": 0, 00:41:16.900 "data_size": 65536 00:41:16.900 }, 00:41:16.900 { 00:41:16.900 "name": "BaseBdev2", 00:41:16.900 "uuid": "168a8b2f-b5e3-5348-bb5b-6856acb7244f", 00:41:16.900 "is_configured": true, 00:41:16.900 "data_offset": 0, 00:41:16.900 "data_size": 65536 00:41:16.900 }, 00:41:16.900 { 00:41:16.900 "name": "BaseBdev3", 00:41:16.900 "uuid": "02a34a44-7520-5516-8372-4805758868a2", 00:41:16.900 "is_configured": true, 00:41:16.900 "data_offset": 0, 00:41:16.900 "data_size": 65536 00:41:16.900 }, 00:41:16.900 { 00:41:16.900 "name": "BaseBdev4", 00:41:16.900 "uuid": "a3215ab1-dcda-5dab-94eb-b6b3b41c3054", 00:41:16.900 "is_configured": true, 00:41:16.900 "data_offset": 0, 00:41:16.900 "data_size": 65536 00:41:16.900 } 00:41:16.900 ] 00:41:16.900 }' 00:41:16.900 23:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:16.900 23:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:16.900 23:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:17.171 23:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:17.171 23:22:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- 
# sleep 1 00:41:18.104 23:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:41:18.104 23:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:18.104 23:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:18.104 23:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:18.104 23:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:18.104 23:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:18.104 23:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:18.104 23:22:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:18.104 23:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:18.104 23:22:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:18.104 23:22:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:18.104 23:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:18.104 "name": "raid_bdev1", 00:41:18.104 "uuid": "3d703a4e-1a92-4088-b17b-acdd7cf6cad4", 00:41:18.104 "strip_size_kb": 64, 00:41:18.104 "state": "online", 00:41:18.104 "raid_level": "raid5f", 00:41:18.104 "superblock": false, 00:41:18.104 "num_base_bdevs": 4, 00:41:18.104 "num_base_bdevs_discovered": 4, 00:41:18.104 "num_base_bdevs_operational": 4, 00:41:18.104 "process": { 00:41:18.104 "type": "rebuild", 00:41:18.104 "target": "spare", 00:41:18.104 "progress": { 00:41:18.104 "blocks": 107520, 00:41:18.104 "percent": 54 00:41:18.104 } 00:41:18.104 }, 00:41:18.104 "base_bdevs_list": [ 00:41:18.104 { 00:41:18.105 "name": "spare", 00:41:18.105 "uuid": 
"1c09327d-f4bf-5249-85e2-39672c4bc4ba", 00:41:18.105 "is_configured": true, 00:41:18.105 "data_offset": 0, 00:41:18.105 "data_size": 65536 00:41:18.105 }, 00:41:18.105 { 00:41:18.105 "name": "BaseBdev2", 00:41:18.105 "uuid": "168a8b2f-b5e3-5348-bb5b-6856acb7244f", 00:41:18.105 "is_configured": true, 00:41:18.105 "data_offset": 0, 00:41:18.105 "data_size": 65536 00:41:18.105 }, 00:41:18.105 { 00:41:18.105 "name": "BaseBdev3", 00:41:18.105 "uuid": "02a34a44-7520-5516-8372-4805758868a2", 00:41:18.105 "is_configured": true, 00:41:18.105 "data_offset": 0, 00:41:18.105 "data_size": 65536 00:41:18.105 }, 00:41:18.105 { 00:41:18.105 "name": "BaseBdev4", 00:41:18.105 "uuid": "a3215ab1-dcda-5dab-94eb-b6b3b41c3054", 00:41:18.105 "is_configured": true, 00:41:18.105 "data_offset": 0, 00:41:18.105 "data_size": 65536 00:41:18.105 } 00:41:18.105 ] 00:41:18.105 }' 00:41:18.105 23:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:18.105 23:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:18.105 23:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:18.105 23:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:18.105 23:22:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:41:19.478 23:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:41:19.478 23:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:19.478 23:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:19.478 23:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:19.478 23:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:19.478 23:22:59 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:19.478 23:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:19.478 23:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:19.478 23:22:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:19.478 23:22:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:19.478 23:22:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:19.478 23:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:19.478 "name": "raid_bdev1", 00:41:19.478 "uuid": "3d703a4e-1a92-4088-b17b-acdd7cf6cad4", 00:41:19.478 "strip_size_kb": 64, 00:41:19.478 "state": "online", 00:41:19.478 "raid_level": "raid5f", 00:41:19.478 "superblock": false, 00:41:19.478 "num_base_bdevs": 4, 00:41:19.478 "num_base_bdevs_discovered": 4, 00:41:19.478 "num_base_bdevs_operational": 4, 00:41:19.478 "process": { 00:41:19.478 "type": "rebuild", 00:41:19.478 "target": "spare", 00:41:19.478 "progress": { 00:41:19.478 "blocks": 128640, 00:41:19.478 "percent": 65 00:41:19.478 } 00:41:19.478 }, 00:41:19.478 "base_bdevs_list": [ 00:41:19.478 { 00:41:19.478 "name": "spare", 00:41:19.478 "uuid": "1c09327d-f4bf-5249-85e2-39672c4bc4ba", 00:41:19.478 "is_configured": true, 00:41:19.478 "data_offset": 0, 00:41:19.478 "data_size": 65536 00:41:19.478 }, 00:41:19.478 { 00:41:19.478 "name": "BaseBdev2", 00:41:19.478 "uuid": "168a8b2f-b5e3-5348-bb5b-6856acb7244f", 00:41:19.478 "is_configured": true, 00:41:19.478 "data_offset": 0, 00:41:19.478 "data_size": 65536 00:41:19.478 }, 00:41:19.478 { 00:41:19.478 "name": "BaseBdev3", 00:41:19.478 "uuid": "02a34a44-7520-5516-8372-4805758868a2", 00:41:19.478 "is_configured": true, 00:41:19.478 "data_offset": 0, 00:41:19.478 "data_size": 65536 00:41:19.478 }, 
00:41:19.478 { 00:41:19.478 "name": "BaseBdev4", 00:41:19.478 "uuid": "a3215ab1-dcda-5dab-94eb-b6b3b41c3054", 00:41:19.478 "is_configured": true, 00:41:19.478 "data_offset": 0, 00:41:19.478 "data_size": 65536 00:41:19.478 } 00:41:19.478 ] 00:41:19.478 }' 00:41:19.478 23:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:19.478 23:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:19.478 23:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:19.478 23:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:19.478 23:22:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:41:20.414 23:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:41:20.414 23:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:20.414 23:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:20.414 23:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:20.414 23:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:20.414 23:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:20.414 23:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:20.414 23:23:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.414 23:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:20.414 23:23:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:20.414 23:23:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:41:20.414 23:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:20.414 "name": "raid_bdev1", 00:41:20.414 "uuid": "3d703a4e-1a92-4088-b17b-acdd7cf6cad4", 00:41:20.414 "strip_size_kb": 64, 00:41:20.414 "state": "online", 00:41:20.415 "raid_level": "raid5f", 00:41:20.415 "superblock": false, 00:41:20.415 "num_base_bdevs": 4, 00:41:20.415 "num_base_bdevs_discovered": 4, 00:41:20.415 "num_base_bdevs_operational": 4, 00:41:20.415 "process": { 00:41:20.415 "type": "rebuild", 00:41:20.415 "target": "spare", 00:41:20.415 "progress": { 00:41:20.415 "blocks": 151680, 00:41:20.415 "percent": 77 00:41:20.415 } 00:41:20.415 }, 00:41:20.415 "base_bdevs_list": [ 00:41:20.415 { 00:41:20.415 "name": "spare", 00:41:20.415 "uuid": "1c09327d-f4bf-5249-85e2-39672c4bc4ba", 00:41:20.415 "is_configured": true, 00:41:20.415 "data_offset": 0, 00:41:20.415 "data_size": 65536 00:41:20.415 }, 00:41:20.415 { 00:41:20.415 "name": "BaseBdev2", 00:41:20.415 "uuid": "168a8b2f-b5e3-5348-bb5b-6856acb7244f", 00:41:20.415 "is_configured": true, 00:41:20.415 "data_offset": 0, 00:41:20.415 "data_size": 65536 00:41:20.415 }, 00:41:20.415 { 00:41:20.415 "name": "BaseBdev3", 00:41:20.415 "uuid": "02a34a44-7520-5516-8372-4805758868a2", 00:41:20.415 "is_configured": true, 00:41:20.415 "data_offset": 0, 00:41:20.415 "data_size": 65536 00:41:20.415 }, 00:41:20.415 { 00:41:20.415 "name": "BaseBdev4", 00:41:20.415 "uuid": "a3215ab1-dcda-5dab-94eb-b6b3b41c3054", 00:41:20.415 "is_configured": true, 00:41:20.415 "data_offset": 0, 00:41:20.415 "data_size": 65536 00:41:20.415 } 00:41:20.415 ] 00:41:20.415 }' 00:41:20.415 23:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:20.415 23:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:20.415 23:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:20.415 23:23:00 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:20.415 23:23:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:41:21.369 23:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:41:21.369 23:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:21.369 23:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:21.369 23:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:21.369 23:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:21.369 23:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:21.369 23:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:21.369 23:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.369 23:23:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:21.369 23:23:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:21.626 23:23:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.626 23:23:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:21.626 "name": "raid_bdev1", 00:41:21.626 "uuid": "3d703a4e-1a92-4088-b17b-acdd7cf6cad4", 00:41:21.626 "strip_size_kb": 64, 00:41:21.626 "state": "online", 00:41:21.626 "raid_level": "raid5f", 00:41:21.626 "superblock": false, 00:41:21.626 "num_base_bdevs": 4, 00:41:21.626 "num_base_bdevs_discovered": 4, 00:41:21.626 "num_base_bdevs_operational": 4, 00:41:21.626 "process": { 00:41:21.626 "type": "rebuild", 00:41:21.626 "target": "spare", 00:41:21.626 "progress": { 00:41:21.626 "blocks": 172800, 
00:41:21.626 "percent": 87 00:41:21.626 } 00:41:21.626 }, 00:41:21.626 "base_bdevs_list": [ 00:41:21.626 { 00:41:21.626 "name": "spare", 00:41:21.626 "uuid": "1c09327d-f4bf-5249-85e2-39672c4bc4ba", 00:41:21.626 "is_configured": true, 00:41:21.626 "data_offset": 0, 00:41:21.626 "data_size": 65536 00:41:21.626 }, 00:41:21.626 { 00:41:21.626 "name": "BaseBdev2", 00:41:21.626 "uuid": "168a8b2f-b5e3-5348-bb5b-6856acb7244f", 00:41:21.626 "is_configured": true, 00:41:21.626 "data_offset": 0, 00:41:21.626 "data_size": 65536 00:41:21.626 }, 00:41:21.626 { 00:41:21.626 "name": "BaseBdev3", 00:41:21.626 "uuid": "02a34a44-7520-5516-8372-4805758868a2", 00:41:21.626 "is_configured": true, 00:41:21.626 "data_offset": 0, 00:41:21.626 "data_size": 65536 00:41:21.626 }, 00:41:21.626 { 00:41:21.626 "name": "BaseBdev4", 00:41:21.626 "uuid": "a3215ab1-dcda-5dab-94eb-b6b3b41c3054", 00:41:21.626 "is_configured": true, 00:41:21.626 "data_offset": 0, 00:41:21.626 "data_size": 65536 00:41:21.626 } 00:41:21.626 ] 00:41:21.626 }' 00:41:21.626 23:23:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:21.626 23:23:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:21.626 23:23:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:21.626 23:23:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:21.626 23:23:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:41:22.562 23:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:41:22.562 23:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:22.562 23:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:22.562 23:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:41:22.562 23:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:22.562 23:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:22.562 23:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:22.562 23:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:22.562 23:23:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:22.562 23:23:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:22.562 23:23:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:22.562 23:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:22.562 "name": "raid_bdev1", 00:41:22.562 "uuid": "3d703a4e-1a92-4088-b17b-acdd7cf6cad4", 00:41:22.562 "strip_size_kb": 64, 00:41:22.562 "state": "online", 00:41:22.562 "raid_level": "raid5f", 00:41:22.562 "superblock": false, 00:41:22.562 "num_base_bdevs": 4, 00:41:22.562 "num_base_bdevs_discovered": 4, 00:41:22.562 "num_base_bdevs_operational": 4, 00:41:22.562 "process": { 00:41:22.562 "type": "rebuild", 00:41:22.562 "target": "spare", 00:41:22.562 "progress": { 00:41:22.562 "blocks": 193920, 00:41:22.562 "percent": 98 00:41:22.562 } 00:41:22.562 }, 00:41:22.562 "base_bdevs_list": [ 00:41:22.562 { 00:41:22.562 "name": "spare", 00:41:22.562 "uuid": "1c09327d-f4bf-5249-85e2-39672c4bc4ba", 00:41:22.562 "is_configured": true, 00:41:22.562 "data_offset": 0, 00:41:22.562 "data_size": 65536 00:41:22.562 }, 00:41:22.562 { 00:41:22.562 "name": "BaseBdev2", 00:41:22.562 "uuid": "168a8b2f-b5e3-5348-bb5b-6856acb7244f", 00:41:22.562 "is_configured": true, 00:41:22.562 "data_offset": 0, 00:41:22.562 "data_size": 65536 00:41:22.562 }, 00:41:22.562 { 00:41:22.562 "name": "BaseBdev3", 00:41:22.562 "uuid": 
"02a34a44-7520-5516-8372-4805758868a2", 00:41:22.562 "is_configured": true, 00:41:22.562 "data_offset": 0, 00:41:22.562 "data_size": 65536 00:41:22.562 }, 00:41:22.562 { 00:41:22.562 "name": "BaseBdev4", 00:41:22.562 "uuid": "a3215ab1-dcda-5dab-94eb-b6b3b41c3054", 00:41:22.562 "is_configured": true, 00:41:22.562 "data_offset": 0, 00:41:22.562 "data_size": 65536 00:41:22.562 } 00:41:22.562 ] 00:41:22.562 }' 00:41:22.562 23:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:22.821 23:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:22.821 23:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:22.821 [2024-12-09 23:23:03.244163] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:41:22.821 [2024-12-09 23:23:03.244405] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:41:22.821 [2024-12-09 23:23:03.244483] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:22.821 23:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:22.821 23:23:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:41:23.758 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:41:23.758 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:23.758 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:23.758 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:23.758 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:23.758 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:41:23.758 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:23.758 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:23.758 23:23:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:23.758 23:23:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:23.758 23:23:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:23.758 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:23.758 "name": "raid_bdev1", 00:41:23.758 "uuid": "3d703a4e-1a92-4088-b17b-acdd7cf6cad4", 00:41:23.758 "strip_size_kb": 64, 00:41:23.758 "state": "online", 00:41:23.758 "raid_level": "raid5f", 00:41:23.758 "superblock": false, 00:41:23.758 "num_base_bdevs": 4, 00:41:23.758 "num_base_bdevs_discovered": 4, 00:41:23.758 "num_base_bdevs_operational": 4, 00:41:23.758 "base_bdevs_list": [ 00:41:23.758 { 00:41:23.758 "name": "spare", 00:41:23.758 "uuid": "1c09327d-f4bf-5249-85e2-39672c4bc4ba", 00:41:23.758 "is_configured": true, 00:41:23.758 "data_offset": 0, 00:41:23.758 "data_size": 65536 00:41:23.758 }, 00:41:23.758 { 00:41:23.758 "name": "BaseBdev2", 00:41:23.758 "uuid": "168a8b2f-b5e3-5348-bb5b-6856acb7244f", 00:41:23.758 "is_configured": true, 00:41:23.758 "data_offset": 0, 00:41:23.758 "data_size": 65536 00:41:23.758 }, 00:41:23.758 { 00:41:23.758 "name": "BaseBdev3", 00:41:23.758 "uuid": "02a34a44-7520-5516-8372-4805758868a2", 00:41:23.758 "is_configured": true, 00:41:23.758 "data_offset": 0, 00:41:23.758 "data_size": 65536 00:41:23.758 }, 00:41:23.758 { 00:41:23.758 "name": "BaseBdev4", 00:41:23.758 "uuid": "a3215ab1-dcda-5dab-94eb-b6b3b41c3054", 00:41:23.758 "is_configured": true, 00:41:23.758 "data_offset": 0, 00:41:23.758 "data_size": 65536 00:41:23.758 } 00:41:23.758 ] 00:41:23.758 }' 00:41:23.758 23:23:04 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:23.758 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:41:23.758 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:23.758 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:41:23.758 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:41:23.758 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:23.758 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:23.758 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:41:23.758 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:41:23.758 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:24.017 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:24.017 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:24.017 23:23:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:24.017 23:23:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:24.017 23:23:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:24.017 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:24.017 "name": "raid_bdev1", 00:41:24.017 "uuid": "3d703a4e-1a92-4088-b17b-acdd7cf6cad4", 00:41:24.017 "strip_size_kb": 64, 00:41:24.017 "state": "online", 00:41:24.017 "raid_level": "raid5f", 00:41:24.017 "superblock": false, 00:41:24.017 "num_base_bdevs": 4, 00:41:24.017 
"num_base_bdevs_discovered": 4, 00:41:24.017 "num_base_bdevs_operational": 4, 00:41:24.017 "base_bdevs_list": [ 00:41:24.017 { 00:41:24.017 "name": "spare", 00:41:24.017 "uuid": "1c09327d-f4bf-5249-85e2-39672c4bc4ba", 00:41:24.017 "is_configured": true, 00:41:24.017 "data_offset": 0, 00:41:24.017 "data_size": 65536 00:41:24.017 }, 00:41:24.017 { 00:41:24.017 "name": "BaseBdev2", 00:41:24.017 "uuid": "168a8b2f-b5e3-5348-bb5b-6856acb7244f", 00:41:24.017 "is_configured": true, 00:41:24.017 "data_offset": 0, 00:41:24.017 "data_size": 65536 00:41:24.017 }, 00:41:24.017 { 00:41:24.017 "name": "BaseBdev3", 00:41:24.017 "uuid": "02a34a44-7520-5516-8372-4805758868a2", 00:41:24.017 "is_configured": true, 00:41:24.017 "data_offset": 0, 00:41:24.017 "data_size": 65536 00:41:24.017 }, 00:41:24.017 { 00:41:24.017 "name": "BaseBdev4", 00:41:24.017 "uuid": "a3215ab1-dcda-5dab-94eb-b6b3b41c3054", 00:41:24.017 "is_configured": true, 00:41:24.017 "data_offset": 0, 00:41:24.017 "data_size": 65536 00:41:24.017 } 00:41:24.017 ] 00:41:24.017 }' 00:41:24.017 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:24.017 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:41:24.017 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:24.017 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:41:24.017 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:41:24.017 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:24.017 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:24.017 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:24.017 23:23:04 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:24.017 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:41:24.017 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:24.017 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:24.017 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:24.017 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:24.017 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:24.017 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:24.017 23:23:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:24.017 23:23:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:24.017 23:23:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:24.017 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:24.017 "name": "raid_bdev1", 00:41:24.017 "uuid": "3d703a4e-1a92-4088-b17b-acdd7cf6cad4", 00:41:24.017 "strip_size_kb": 64, 00:41:24.017 "state": "online", 00:41:24.017 "raid_level": "raid5f", 00:41:24.017 "superblock": false, 00:41:24.017 "num_base_bdevs": 4, 00:41:24.017 "num_base_bdevs_discovered": 4, 00:41:24.017 "num_base_bdevs_operational": 4, 00:41:24.017 "base_bdevs_list": [ 00:41:24.017 { 00:41:24.017 "name": "spare", 00:41:24.017 "uuid": "1c09327d-f4bf-5249-85e2-39672c4bc4ba", 00:41:24.017 "is_configured": true, 00:41:24.017 "data_offset": 0, 00:41:24.017 "data_size": 65536 00:41:24.017 }, 00:41:24.017 { 00:41:24.018 "name": "BaseBdev2", 00:41:24.018 "uuid": "168a8b2f-b5e3-5348-bb5b-6856acb7244f", 00:41:24.018 "is_configured": true, 00:41:24.018 
"data_offset": 0, 00:41:24.018 "data_size": 65536 00:41:24.018 }, 00:41:24.018 { 00:41:24.018 "name": "BaseBdev3", 00:41:24.018 "uuid": "02a34a44-7520-5516-8372-4805758868a2", 00:41:24.018 "is_configured": true, 00:41:24.018 "data_offset": 0, 00:41:24.018 "data_size": 65536 00:41:24.018 }, 00:41:24.018 { 00:41:24.018 "name": "BaseBdev4", 00:41:24.018 "uuid": "a3215ab1-dcda-5dab-94eb-b6b3b41c3054", 00:41:24.018 "is_configured": true, 00:41:24.018 "data_offset": 0, 00:41:24.018 "data_size": 65536 00:41:24.018 } 00:41:24.018 ] 00:41:24.018 }' 00:41:24.018 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:24.018 23:23:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:24.585 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:41:24.585 23:23:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:24.585 23:23:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:24.585 [2024-12-09 23:23:04.966770] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:41:24.585 [2024-12-09 23:23:04.966808] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:41:24.585 [2024-12-09 23:23:04.966899] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:24.585 [2024-12-09 23:23:04.966998] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:41:24.585 [2024-12-09 23:23:04.967012] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:41:24.585 23:23:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:24.585 23:23:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:24.585 23:23:04 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:41:24.585 23:23:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:24.585 23:23:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:24.585 23:23:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:24.585 23:23:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:41:24.585 23:23:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:41:24.585 23:23:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:41:24.585 23:23:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:41:24.585 23:23:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:41:24.585 23:23:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:41:24.585 23:23:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:41:24.585 23:23:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:24.585 23:23:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:41:24.585 23:23:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:41:24.585 23:23:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:41:24.585 23:23:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:41:24.585 23:23:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:41:24.851 /dev/nbd0 00:41:24.851 23:23:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:41:24.851 23:23:05 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:41:24.851 23:23:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:41:24.851 23:23:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:41:24.851 23:23:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:41:24.851 23:23:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:41:24.851 23:23:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:41:24.851 23:23:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:41:24.851 23:23:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:41:24.851 23:23:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:41:24.851 23:23:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:24.851 1+0 records in 00:41:24.851 1+0 records out 00:41:24.851 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353469 s, 11.6 MB/s 00:41:24.851 23:23:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:24.851 23:23:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:41:24.851 23:23:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:24.851 23:23:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:41:24.851 23:23:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:41:24.851 23:23:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:24.851 23:23:05 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:41:24.851 23:23:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:41:25.110 /dev/nbd1 00:41:25.110 23:23:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:41:25.110 23:23:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:41:25.110 23:23:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:41:25.110 23:23:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:41:25.110 23:23:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:41:25.110 23:23:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:41:25.110 23:23:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:41:25.110 23:23:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:41:25.110 23:23:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:41:25.110 23:23:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:41:25.110 23:23:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:25.110 1+0 records in 00:41:25.110 1+0 records out 00:41:25.110 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420494 s, 9.7 MB/s 00:41:25.110 23:23:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:25.110 23:23:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:41:25.110 23:23:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:25.110 
23:23:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:41:25.110 23:23:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:41:25.110 23:23:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:25.110 23:23:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:41:25.110 23:23:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:41:25.368 23:23:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:41:25.368 23:23:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:41:25.368 23:23:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:25.368 23:23:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:41:25.368 23:23:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:41:25.368 23:23:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:25.368 23:23:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:41:25.627 23:23:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:41:25.627 23:23:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:41:25.627 23:23:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:41:25.627 23:23:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:25.627 23:23:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:25.627 23:23:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:41:25.627 23:23:06 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:41:25.627 23:23:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:41:25.627 23:23:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:25.627 23:23:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:41:25.627 23:23:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:41:25.886 23:23:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:41:25.886 23:23:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:41:25.886 23:23:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:25.886 23:23:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:25.886 23:23:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:41:25.886 23:23:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:41:25.886 23:23:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:41:25.886 23:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:41:25.886 23:23:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84479 00:41:25.886 23:23:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84479 ']' 00:41:25.886 23:23:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84479 00:41:25.886 23:23:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:41:25.886 23:23:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:25.886 23:23:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84479 00:41:25.886 23:23:06 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:25.886 23:23:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:25.886 killing process with pid 84479 00:41:25.886 Received shutdown signal, test time was about 60.000000 seconds 00:41:25.886 00:41:25.886 Latency(us) 00:41:25.886 [2024-12-09T23:23:06.522Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:25.886 [2024-12-09T23:23:06.522Z] =================================================================================================================== 00:41:25.886 [2024-12-09T23:23:06.522Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:41:25.886 23:23:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84479' 00:41:25.886 23:23:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84479 00:41:25.886 [2024-12-09 23:23:06.318279] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:41:25.886 23:23:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84479 00:41:26.453 [2024-12-09 23:23:06.815400] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:41:27.388 23:23:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:41:27.388 00:41:27.388 real 0m20.116s 00:41:27.388 user 0m23.888s 00:41:27.388 sys 0m2.522s 00:41:27.388 23:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:27.388 23:23:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:27.388 ************************************ 00:41:27.388 END TEST raid5f_rebuild_test 00:41:27.388 ************************************ 00:41:27.388 23:23:08 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:41:27.388 23:23:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 
00:41:27.388 23:23:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:27.388 23:23:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:41:27.647 ************************************ 00:41:27.647 START TEST raid5f_rebuild_test_sb 00:41:27.647 ************************************ 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=84997 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 84997 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84997 ']' 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:27.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:27.647 23:23:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:27.647 I/O size of 3145728 is greater than zero copy threshold (65536). 00:41:27.647 Zero copy mechanism will not be used. 00:41:27.647 [2024-12-09 23:23:08.174554] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:41:27.647 [2024-12-09 23:23:08.174729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84997 ] 00:41:27.905 [2024-12-09 23:23:08.367490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:27.905 [2024-12-09 23:23:08.486004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:28.163 [2024-12-09 23:23:08.690127] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:41:28.163 [2024-12-09 23:23:08.690195] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:41:28.421 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:28.421 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:41:28.421 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:41:28.421 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:41:28.421 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.421 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:28.680 BaseBdev1_malloc 00:41:28.680 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.680 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:41:28.680 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.680 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:28.680 [2024-12-09 23:23:09.097956] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:41:28.680 [2024-12-09 23:23:09.098144] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:28.680 [2024-12-09 23:23:09.098177] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:41:28.680 [2024-12-09 23:23:09.098192] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:28.680 [2024-12-09 23:23:09.100543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:28.680 [2024-12-09 23:23:09.100588] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:41:28.680 BaseBdev1 00:41:28.680 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.680 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:41:28.680 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:41:28.680 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.680 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:28.680 BaseBdev2_malloc 00:41:28.680 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.680 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:41:28.680 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.680 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:28.680 [2024-12-09 23:23:09.152238] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:41:28.680 [2024-12-09 23:23:09.152307] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:41:28.680 [2024-12-09 23:23:09.152329] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:41:28.680 [2024-12-09 23:23:09.152343] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:28.680 [2024-12-09 23:23:09.154732] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:28.680 [2024-12-09 23:23:09.154776] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:41:28.680 BaseBdev2 00:41:28.680 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.680 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:41:28.680 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:41:28.680 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.680 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:28.680 BaseBdev3_malloc 00:41:28.680 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.680 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:41:28.680 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.680 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:28.680 [2024-12-09 23:23:09.222786] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:41:28.680 [2024-12-09 23:23:09.222980] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:28.680 [2024-12-09 23:23:09.223013] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:41:28.680 [2024-12-09 
23:23:09.223029] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:28.680 [2024-12-09 23:23:09.225507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:28.680 [2024-12-09 23:23:09.225550] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:41:28.680 BaseBdev3 00:41:28.680 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.680 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:41:28.680 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:41:28.680 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.680 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:28.680 BaseBdev4_malloc 00:41:28.680 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.680 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:41:28.680 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.680 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:28.680 [2024-12-09 23:23:09.279480] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:41:28.680 [2024-12-09 23:23:09.279546] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:28.680 [2024-12-09 23:23:09.279570] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:41:28.680 [2024-12-09 23:23:09.279584] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:28.680 [2024-12-09 23:23:09.281898] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:41:28.680 [2024-12-09 23:23:09.282070] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:41:28.680 BaseBdev4 00:41:28.680 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.680 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:41:28.680 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.680 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:28.939 spare_malloc 00:41:28.939 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.939 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:41:28.939 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.939 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:28.939 spare_delay 00:41:28.939 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.939 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:41:28.939 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.939 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:28.939 [2024-12-09 23:23:09.349848] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:41:28.939 [2024-12-09 23:23:09.349905] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:28.939 [2024-12-09 23:23:09.349926] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:41:28.939 [2024-12-09 23:23:09.349940] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:28.939 [2024-12-09 23:23:09.352293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:28.939 [2024-12-09 23:23:09.352337] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:41:28.939 spare 00:41:28.939 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.939 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:41:28.939 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.939 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:28.940 [2024-12-09 23:23:09.361889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:41:28.940 [2024-12-09 23:23:09.363936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:41:28.940 [2024-12-09 23:23:09.363997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:41:28.940 [2024-12-09 23:23:09.364048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:41:28.940 [2024-12-09 23:23:09.364237] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:41:28.940 [2024-12-09 23:23:09.364252] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:41:28.940 [2024-12-09 23:23:09.364523] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:41:28.940 [2024-12-09 23:23:09.372279] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:41:28.940 [2024-12-09 23:23:09.372448] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:41:28.940 [2024-12-09 23:23:09.372653] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:28.940 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.940 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:41:28.940 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:28.940 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:28.940 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:28.940 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:28.940 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:41:28.940 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:28.940 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:28.940 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:28.940 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:28.940 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:28.940 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:28.940 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.940 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:28.940 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.940 23:23:09 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:28.940 "name": "raid_bdev1", 00:41:28.940 "uuid": "376dcfcb-887e-492e-bd10-52e85c73370c", 00:41:28.940 "strip_size_kb": 64, 00:41:28.940 "state": "online", 00:41:28.940 "raid_level": "raid5f", 00:41:28.940 "superblock": true, 00:41:28.940 "num_base_bdevs": 4, 00:41:28.940 "num_base_bdevs_discovered": 4, 00:41:28.940 "num_base_bdevs_operational": 4, 00:41:28.940 "base_bdevs_list": [ 00:41:28.940 { 00:41:28.940 "name": "BaseBdev1", 00:41:28.940 "uuid": "045dd1c6-9fec-5abb-ab68-e6a5e16bf852", 00:41:28.940 "is_configured": true, 00:41:28.940 "data_offset": 2048, 00:41:28.940 "data_size": 63488 00:41:28.940 }, 00:41:28.940 { 00:41:28.940 "name": "BaseBdev2", 00:41:28.940 "uuid": "8a274a42-f820-524c-8907-fd83117bd9fa", 00:41:28.940 "is_configured": true, 00:41:28.940 "data_offset": 2048, 00:41:28.940 "data_size": 63488 00:41:28.940 }, 00:41:28.940 { 00:41:28.940 "name": "BaseBdev3", 00:41:28.940 "uuid": "5626c641-13a7-5473-bd3a-5c5709fb674f", 00:41:28.940 "is_configured": true, 00:41:28.940 "data_offset": 2048, 00:41:28.940 "data_size": 63488 00:41:28.940 }, 00:41:28.940 { 00:41:28.940 "name": "BaseBdev4", 00:41:28.940 "uuid": "cd127e4b-64f2-53f7-becd-9adb53efe756", 00:41:28.940 "is_configured": true, 00:41:28.940 "data_offset": 2048, 00:41:28.940 "data_size": 63488 00:41:28.940 } 00:41:28.940 ] 00:41:28.940 }' 00:41:28.940 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:28.940 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:29.199 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:41:29.199 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:41:29.199 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.199 23:23:09 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:29.199 [2024-12-09 23:23:09.777760] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:41:29.199 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.199 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:41:29.199 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:29.199 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.199 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:29.199 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:41:29.457 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.457 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:41:29.457 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:41:29.457 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:41:29.457 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:41:29.457 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:41:29.457 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:41:29.457 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:41:29.457 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:41:29.457 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:41:29.457 23:23:09 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:41:29.458 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:41:29.458 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:41:29.458 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:41:29.458 23:23:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:41:29.458 [2024-12-09 23:23:10.061544] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:41:29.458 /dev/nbd0 00:41:29.716 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:41:29.716 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:41:29.716 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:41:29.716 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:41:29.716 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:41:29.716 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:41:29.716 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:41:29.716 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:41:29.716 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:41:29.716 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:41:29.716 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:29.716 1+0 records in 00:41:29.716 
1+0 records out 00:41:29.716 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345583 s, 11.9 MB/s 00:41:29.716 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:29.716 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:41:29.716 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:29.716 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:41:29.716 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:41:29.716 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:29.716 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:41:29.716 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:41:29.716 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:41:29.716 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:41:29.716 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:41:30.284 496+0 records in 00:41:30.284 496+0 records out 00:41:30.284 97517568 bytes (98 MB, 93 MiB) copied, 0.476752 s, 205 MB/s 00:41:30.284 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:41:30.284 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:41:30.284 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:41:30.284 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:41:30.284 23:23:10 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:41:30.284 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:30.284 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:41:30.284 [2024-12-09 23:23:10.834639] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:30.284 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:41:30.284 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:41:30.284 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:41:30.284 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:30.284 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:30.284 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:41:30.284 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:41:30.284 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:41:30.284 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:41:30.284 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.284 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:30.284 [2024-12-09 23:23:10.879922] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:41:30.284 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.284 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:41:30.284 23:23:10 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:30.284 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:30.284 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:30.284 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:30.284 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:41:30.284 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:30.284 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:30.284 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:30.284 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:30.284 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:30.284 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:30.284 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.284 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:30.542 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.542 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:30.542 "name": "raid_bdev1", 00:41:30.542 "uuid": "376dcfcb-887e-492e-bd10-52e85c73370c", 00:41:30.542 "strip_size_kb": 64, 00:41:30.542 "state": "online", 00:41:30.542 "raid_level": "raid5f", 00:41:30.542 "superblock": true, 00:41:30.542 "num_base_bdevs": 4, 00:41:30.542 "num_base_bdevs_discovered": 3, 00:41:30.542 "num_base_bdevs_operational": 3, 00:41:30.542 
"base_bdevs_list": [ 00:41:30.542 { 00:41:30.542 "name": null, 00:41:30.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:30.542 "is_configured": false, 00:41:30.542 "data_offset": 0, 00:41:30.542 "data_size": 63488 00:41:30.542 }, 00:41:30.542 { 00:41:30.542 "name": "BaseBdev2", 00:41:30.542 "uuid": "8a274a42-f820-524c-8907-fd83117bd9fa", 00:41:30.542 "is_configured": true, 00:41:30.542 "data_offset": 2048, 00:41:30.542 "data_size": 63488 00:41:30.542 }, 00:41:30.542 { 00:41:30.542 "name": "BaseBdev3", 00:41:30.542 "uuid": "5626c641-13a7-5473-bd3a-5c5709fb674f", 00:41:30.542 "is_configured": true, 00:41:30.542 "data_offset": 2048, 00:41:30.542 "data_size": 63488 00:41:30.542 }, 00:41:30.542 { 00:41:30.542 "name": "BaseBdev4", 00:41:30.542 "uuid": "cd127e4b-64f2-53f7-becd-9adb53efe756", 00:41:30.543 "is_configured": true, 00:41:30.543 "data_offset": 2048, 00:41:30.543 "data_size": 63488 00:41:30.543 } 00:41:30.543 ] 00:41:30.543 }' 00:41:30.543 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:30.543 23:23:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:30.801 23:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:41:30.801 23:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.801 23:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:30.801 [2024-12-09 23:23:11.307342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:30.801 [2024-12-09 23:23:11.326146] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:41:30.801 23:23:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.801 23:23:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:41:30.801 [2024-12-09 23:23:11.335857] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:41:31.736 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:31.736 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:31.736 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:31.736 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:31.736 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:31.736 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:31.736 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.736 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:31.736 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:31.736 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.996 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:31.996 "name": "raid_bdev1", 00:41:31.996 "uuid": "376dcfcb-887e-492e-bd10-52e85c73370c", 00:41:31.996 "strip_size_kb": 64, 00:41:31.996 "state": "online", 00:41:31.996 "raid_level": "raid5f", 00:41:31.996 "superblock": true, 00:41:31.996 "num_base_bdevs": 4, 00:41:31.996 "num_base_bdevs_discovered": 4, 00:41:31.996 "num_base_bdevs_operational": 4, 00:41:31.996 "process": { 00:41:31.996 "type": "rebuild", 00:41:31.996 "target": "spare", 00:41:31.996 "progress": { 00:41:31.996 "blocks": 19200, 00:41:31.996 "percent": 10 00:41:31.996 } 00:41:31.996 }, 00:41:31.996 "base_bdevs_list": [ 00:41:31.996 { 00:41:31.996 "name": "spare", 00:41:31.996 "uuid": 
"c009e56c-cd8f-5940-8693-647932f47248", 00:41:31.996 "is_configured": true, 00:41:31.996 "data_offset": 2048, 00:41:31.996 "data_size": 63488 00:41:31.996 }, 00:41:31.996 { 00:41:31.996 "name": "BaseBdev2", 00:41:31.996 "uuid": "8a274a42-f820-524c-8907-fd83117bd9fa", 00:41:31.996 "is_configured": true, 00:41:31.996 "data_offset": 2048, 00:41:31.996 "data_size": 63488 00:41:31.996 }, 00:41:31.996 { 00:41:31.996 "name": "BaseBdev3", 00:41:31.996 "uuid": "5626c641-13a7-5473-bd3a-5c5709fb674f", 00:41:31.996 "is_configured": true, 00:41:31.996 "data_offset": 2048, 00:41:31.996 "data_size": 63488 00:41:31.996 }, 00:41:31.996 { 00:41:31.996 "name": "BaseBdev4", 00:41:31.996 "uuid": "cd127e4b-64f2-53f7-becd-9adb53efe756", 00:41:31.996 "is_configured": true, 00:41:31.996 "data_offset": 2048, 00:41:31.996 "data_size": 63488 00:41:31.996 } 00:41:31.996 ] 00:41:31.996 }' 00:41:31.996 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:31.996 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:31.996 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:31.996 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:31.996 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:41:31.996 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.996 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:31.996 [2024-12-09 23:23:12.483487] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:31.996 [2024-12-09 23:23:12.542613] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:41:31.996 [2024-12-09 23:23:12.542697] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:31.996 [2024-12-09 23:23:12.542717] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:31.996 [2024-12-09 23:23:12.542730] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:41:31.996 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.996 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:41:31.996 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:31.996 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:31.996 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:31.996 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:31.996 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:41:31.996 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:31.996 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:31.996 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:31.996 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:31.996 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:31.996 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:31.996 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.996 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:41:31.996 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.996 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:31.996 "name": "raid_bdev1", 00:41:31.996 "uuid": "376dcfcb-887e-492e-bd10-52e85c73370c", 00:41:31.996 "strip_size_kb": 64, 00:41:31.996 "state": "online", 00:41:31.996 "raid_level": "raid5f", 00:41:31.996 "superblock": true, 00:41:31.996 "num_base_bdevs": 4, 00:41:31.996 "num_base_bdevs_discovered": 3, 00:41:31.996 "num_base_bdevs_operational": 3, 00:41:31.996 "base_bdevs_list": [ 00:41:31.996 { 00:41:31.996 "name": null, 00:41:31.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:31.996 "is_configured": false, 00:41:31.996 "data_offset": 0, 00:41:31.996 "data_size": 63488 00:41:31.996 }, 00:41:31.996 { 00:41:31.996 "name": "BaseBdev2", 00:41:31.996 "uuid": "8a274a42-f820-524c-8907-fd83117bd9fa", 00:41:31.996 "is_configured": true, 00:41:31.996 "data_offset": 2048, 00:41:31.996 "data_size": 63488 00:41:31.996 }, 00:41:31.996 { 00:41:31.996 "name": "BaseBdev3", 00:41:31.996 "uuid": "5626c641-13a7-5473-bd3a-5c5709fb674f", 00:41:31.996 "is_configured": true, 00:41:31.996 "data_offset": 2048, 00:41:31.996 "data_size": 63488 00:41:31.996 }, 00:41:31.996 { 00:41:31.996 "name": "BaseBdev4", 00:41:31.996 "uuid": "cd127e4b-64f2-53f7-becd-9adb53efe756", 00:41:31.996 "is_configured": true, 00:41:31.996 "data_offset": 2048, 00:41:31.996 "data_size": 63488 00:41:31.996 } 00:41:31.996 ] 00:41:31.996 }' 00:41:31.996 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:31.996 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:32.562 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:32.562 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:32.562 
23:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:41:32.562 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:41:32.562 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:32.562 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:32.562 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:32.562 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.563 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:32.563 23:23:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.563 23:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:32.563 "name": "raid_bdev1", 00:41:32.563 "uuid": "376dcfcb-887e-492e-bd10-52e85c73370c", 00:41:32.563 "strip_size_kb": 64, 00:41:32.563 "state": "online", 00:41:32.563 "raid_level": "raid5f", 00:41:32.563 "superblock": true, 00:41:32.563 "num_base_bdevs": 4, 00:41:32.563 "num_base_bdevs_discovered": 3, 00:41:32.563 "num_base_bdevs_operational": 3, 00:41:32.563 "base_bdevs_list": [ 00:41:32.563 { 00:41:32.563 "name": null, 00:41:32.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:32.563 "is_configured": false, 00:41:32.563 "data_offset": 0, 00:41:32.563 "data_size": 63488 00:41:32.563 }, 00:41:32.563 { 00:41:32.563 "name": "BaseBdev2", 00:41:32.563 "uuid": "8a274a42-f820-524c-8907-fd83117bd9fa", 00:41:32.563 "is_configured": true, 00:41:32.563 "data_offset": 2048, 00:41:32.563 "data_size": 63488 00:41:32.563 }, 00:41:32.563 { 00:41:32.563 "name": "BaseBdev3", 00:41:32.563 "uuid": "5626c641-13a7-5473-bd3a-5c5709fb674f", 00:41:32.563 "is_configured": true, 00:41:32.563 "data_offset": 2048, 00:41:32.563 
"data_size": 63488 00:41:32.563 }, 00:41:32.563 { 00:41:32.563 "name": "BaseBdev4", 00:41:32.563 "uuid": "cd127e4b-64f2-53f7-becd-9adb53efe756", 00:41:32.563 "is_configured": true, 00:41:32.563 "data_offset": 2048, 00:41:32.563 "data_size": 63488 00:41:32.563 } 00:41:32.563 ] 00:41:32.563 }' 00:41:32.563 23:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:32.563 23:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:41:32.563 23:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:32.563 23:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:41:32.563 23:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:41:32.563 23:23:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.563 23:23:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:32.563 [2024-12-09 23:23:13.086310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:32.563 [2024-12-09 23:23:13.100967] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:41:32.563 23:23:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.563 23:23:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:41:32.563 [2024-12-09 23:23:13.110261] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:41:33.499 23:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:33.499 23:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:33.499 23:23:14 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:33.499 23:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:33.499 23:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:33.499 23:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:33.499 23:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:33.499 23:23:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.499 23:23:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:33.758 23:23:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.758 23:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:33.758 "name": "raid_bdev1", 00:41:33.758 "uuid": "376dcfcb-887e-492e-bd10-52e85c73370c", 00:41:33.758 "strip_size_kb": 64, 00:41:33.758 "state": "online", 00:41:33.758 "raid_level": "raid5f", 00:41:33.758 "superblock": true, 00:41:33.758 "num_base_bdevs": 4, 00:41:33.758 "num_base_bdevs_discovered": 4, 00:41:33.758 "num_base_bdevs_operational": 4, 00:41:33.758 "process": { 00:41:33.758 "type": "rebuild", 00:41:33.758 "target": "spare", 00:41:33.758 "progress": { 00:41:33.758 "blocks": 19200, 00:41:33.758 "percent": 10 00:41:33.758 } 00:41:33.758 }, 00:41:33.758 "base_bdevs_list": [ 00:41:33.758 { 00:41:33.758 "name": "spare", 00:41:33.758 "uuid": "c009e56c-cd8f-5940-8693-647932f47248", 00:41:33.758 "is_configured": true, 00:41:33.758 "data_offset": 2048, 00:41:33.758 "data_size": 63488 00:41:33.758 }, 00:41:33.758 { 00:41:33.758 "name": "BaseBdev2", 00:41:33.758 "uuid": "8a274a42-f820-524c-8907-fd83117bd9fa", 00:41:33.758 "is_configured": true, 00:41:33.758 "data_offset": 2048, 00:41:33.758 "data_size": 63488 00:41:33.758 }, 00:41:33.758 { 
00:41:33.758 "name": "BaseBdev3", 00:41:33.758 "uuid": "5626c641-13a7-5473-bd3a-5c5709fb674f", 00:41:33.758 "is_configured": true, 00:41:33.758 "data_offset": 2048, 00:41:33.758 "data_size": 63488 00:41:33.758 }, 00:41:33.758 { 00:41:33.758 "name": "BaseBdev4", 00:41:33.758 "uuid": "cd127e4b-64f2-53f7-becd-9adb53efe756", 00:41:33.758 "is_configured": true, 00:41:33.758 "data_offset": 2048, 00:41:33.758 "data_size": 63488 00:41:33.758 } 00:41:33.758 ] 00:41:33.758 }' 00:41:33.758 23:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:33.758 23:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:33.758 23:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:33.758 23:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:33.758 23:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:41:33.758 23:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:41:33.758 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:41:33.758 23:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:41:33.758 23:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:41:33.758 23:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=644 00:41:33.758 23:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:41:33.758 23:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:33.758 23:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:33.758 23:23:14 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:33.758 23:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:33.758 23:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:33.758 23:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:33.758 23:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:33.758 23:23:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.758 23:23:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:33.758 23:23:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.758 23:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:33.758 "name": "raid_bdev1", 00:41:33.758 "uuid": "376dcfcb-887e-492e-bd10-52e85c73370c", 00:41:33.758 "strip_size_kb": 64, 00:41:33.758 "state": "online", 00:41:33.758 "raid_level": "raid5f", 00:41:33.758 "superblock": true, 00:41:33.758 "num_base_bdevs": 4, 00:41:33.758 "num_base_bdevs_discovered": 4, 00:41:33.758 "num_base_bdevs_operational": 4, 00:41:33.758 "process": { 00:41:33.758 "type": "rebuild", 00:41:33.758 "target": "spare", 00:41:33.758 "progress": { 00:41:33.758 "blocks": 21120, 00:41:33.758 "percent": 11 00:41:33.758 } 00:41:33.758 }, 00:41:33.758 "base_bdevs_list": [ 00:41:33.758 { 00:41:33.758 "name": "spare", 00:41:33.758 "uuid": "c009e56c-cd8f-5940-8693-647932f47248", 00:41:33.758 "is_configured": true, 00:41:33.758 "data_offset": 2048, 00:41:33.758 "data_size": 63488 00:41:33.758 }, 00:41:33.758 { 00:41:33.758 "name": "BaseBdev2", 00:41:33.758 "uuid": "8a274a42-f820-524c-8907-fd83117bd9fa", 00:41:33.758 "is_configured": true, 00:41:33.758 "data_offset": 2048, 00:41:33.758 "data_size": 63488 00:41:33.758 }, 00:41:33.758 { 
00:41:33.758 "name": "BaseBdev3", 00:41:33.758 "uuid": "5626c641-13a7-5473-bd3a-5c5709fb674f", 00:41:33.758 "is_configured": true, 00:41:33.758 "data_offset": 2048, 00:41:33.758 "data_size": 63488 00:41:33.758 }, 00:41:33.758 { 00:41:33.758 "name": "BaseBdev4", 00:41:33.758 "uuid": "cd127e4b-64f2-53f7-becd-9adb53efe756", 00:41:33.758 "is_configured": true, 00:41:33.758 "data_offset": 2048, 00:41:33.758 "data_size": 63488 00:41:33.758 } 00:41:33.758 ] 00:41:33.758 }' 00:41:33.758 23:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:33.758 23:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:33.758 23:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:33.758 23:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:33.758 23:23:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:41:35.142 23:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:41:35.142 23:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:35.142 23:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:35.142 23:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:35.142 23:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:35.142 23:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:35.142 23:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:35.142 23:23:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.142 23:23:15 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:41:35.142 23:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:35.143 23:23:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.143 23:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:35.143 "name": "raid_bdev1", 00:41:35.143 "uuid": "376dcfcb-887e-492e-bd10-52e85c73370c", 00:41:35.143 "strip_size_kb": 64, 00:41:35.143 "state": "online", 00:41:35.143 "raid_level": "raid5f", 00:41:35.143 "superblock": true, 00:41:35.143 "num_base_bdevs": 4, 00:41:35.143 "num_base_bdevs_discovered": 4, 00:41:35.143 "num_base_bdevs_operational": 4, 00:41:35.143 "process": { 00:41:35.143 "type": "rebuild", 00:41:35.143 "target": "spare", 00:41:35.143 "progress": { 00:41:35.143 "blocks": 42240, 00:41:35.143 "percent": 22 00:41:35.143 } 00:41:35.143 }, 00:41:35.143 "base_bdevs_list": [ 00:41:35.143 { 00:41:35.143 "name": "spare", 00:41:35.143 "uuid": "c009e56c-cd8f-5940-8693-647932f47248", 00:41:35.143 "is_configured": true, 00:41:35.143 "data_offset": 2048, 00:41:35.143 "data_size": 63488 00:41:35.143 }, 00:41:35.143 { 00:41:35.143 "name": "BaseBdev2", 00:41:35.143 "uuid": "8a274a42-f820-524c-8907-fd83117bd9fa", 00:41:35.143 "is_configured": true, 00:41:35.143 "data_offset": 2048, 00:41:35.143 "data_size": 63488 00:41:35.143 }, 00:41:35.143 { 00:41:35.143 "name": "BaseBdev3", 00:41:35.143 "uuid": "5626c641-13a7-5473-bd3a-5c5709fb674f", 00:41:35.143 "is_configured": true, 00:41:35.143 "data_offset": 2048, 00:41:35.143 "data_size": 63488 00:41:35.143 }, 00:41:35.143 { 00:41:35.143 "name": "BaseBdev4", 00:41:35.143 "uuid": "cd127e4b-64f2-53f7-becd-9adb53efe756", 00:41:35.143 "is_configured": true, 00:41:35.143 "data_offset": 2048, 00:41:35.143 "data_size": 63488 00:41:35.143 } 00:41:35.143 ] 00:41:35.143 }' 00:41:35.143 23:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:41:35.143 23:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:35.143 23:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:35.143 23:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:35.143 23:23:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:41:36.080 23:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:41:36.080 23:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:36.080 23:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:36.080 23:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:36.080 23:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:36.080 23:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:36.080 23:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:36.080 23:23:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:36.080 23:23:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:36.080 23:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:36.080 23:23:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:36.080 23:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:36.081 "name": "raid_bdev1", 00:41:36.081 "uuid": "376dcfcb-887e-492e-bd10-52e85c73370c", 00:41:36.081 "strip_size_kb": 64, 00:41:36.081 "state": "online", 00:41:36.081 
"raid_level": "raid5f", 00:41:36.081 "superblock": true, 00:41:36.081 "num_base_bdevs": 4, 00:41:36.081 "num_base_bdevs_discovered": 4, 00:41:36.081 "num_base_bdevs_operational": 4, 00:41:36.081 "process": { 00:41:36.081 "type": "rebuild", 00:41:36.081 "target": "spare", 00:41:36.081 "progress": { 00:41:36.081 "blocks": 63360, 00:41:36.081 "percent": 33 00:41:36.081 } 00:41:36.081 }, 00:41:36.081 "base_bdevs_list": [ 00:41:36.081 { 00:41:36.081 "name": "spare", 00:41:36.081 "uuid": "c009e56c-cd8f-5940-8693-647932f47248", 00:41:36.081 "is_configured": true, 00:41:36.081 "data_offset": 2048, 00:41:36.081 "data_size": 63488 00:41:36.081 }, 00:41:36.081 { 00:41:36.081 "name": "BaseBdev2", 00:41:36.081 "uuid": "8a274a42-f820-524c-8907-fd83117bd9fa", 00:41:36.081 "is_configured": true, 00:41:36.081 "data_offset": 2048, 00:41:36.081 "data_size": 63488 00:41:36.081 }, 00:41:36.081 { 00:41:36.081 "name": "BaseBdev3", 00:41:36.081 "uuid": "5626c641-13a7-5473-bd3a-5c5709fb674f", 00:41:36.081 "is_configured": true, 00:41:36.081 "data_offset": 2048, 00:41:36.081 "data_size": 63488 00:41:36.081 }, 00:41:36.081 { 00:41:36.081 "name": "BaseBdev4", 00:41:36.081 "uuid": "cd127e4b-64f2-53f7-becd-9adb53efe756", 00:41:36.081 "is_configured": true, 00:41:36.081 "data_offset": 2048, 00:41:36.081 "data_size": 63488 00:41:36.081 } 00:41:36.081 ] 00:41:36.081 }' 00:41:36.081 23:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:36.081 23:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:36.081 23:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:36.081 23:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:36.081 23:23:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:41:37.018 23:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- 
# (( SECONDS < timeout )) 00:41:37.018 23:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:37.018 23:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:37.018 23:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:37.018 23:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:37.018 23:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:37.018 23:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:37.018 23:23:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:37.018 23:23:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:37.018 23:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:37.277 23:23:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:37.277 23:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:37.277 "name": "raid_bdev1", 00:41:37.277 "uuid": "376dcfcb-887e-492e-bd10-52e85c73370c", 00:41:37.277 "strip_size_kb": 64, 00:41:37.277 "state": "online", 00:41:37.277 "raid_level": "raid5f", 00:41:37.277 "superblock": true, 00:41:37.277 "num_base_bdevs": 4, 00:41:37.277 "num_base_bdevs_discovered": 4, 00:41:37.277 "num_base_bdevs_operational": 4, 00:41:37.277 "process": { 00:41:37.277 "type": "rebuild", 00:41:37.277 "target": "spare", 00:41:37.277 "progress": { 00:41:37.277 "blocks": 86400, 00:41:37.277 "percent": 45 00:41:37.277 } 00:41:37.277 }, 00:41:37.277 "base_bdevs_list": [ 00:41:37.277 { 00:41:37.277 "name": "spare", 00:41:37.277 "uuid": "c009e56c-cd8f-5940-8693-647932f47248", 00:41:37.277 "is_configured": true, 
00:41:37.277 "data_offset": 2048, 00:41:37.277 "data_size": 63488 00:41:37.277 }, 00:41:37.277 { 00:41:37.277 "name": "BaseBdev2", 00:41:37.277 "uuid": "8a274a42-f820-524c-8907-fd83117bd9fa", 00:41:37.277 "is_configured": true, 00:41:37.277 "data_offset": 2048, 00:41:37.277 "data_size": 63488 00:41:37.277 }, 00:41:37.277 { 00:41:37.277 "name": "BaseBdev3", 00:41:37.277 "uuid": "5626c641-13a7-5473-bd3a-5c5709fb674f", 00:41:37.277 "is_configured": true, 00:41:37.277 "data_offset": 2048, 00:41:37.277 "data_size": 63488 00:41:37.277 }, 00:41:37.277 { 00:41:37.277 "name": "BaseBdev4", 00:41:37.277 "uuid": "cd127e4b-64f2-53f7-becd-9adb53efe756", 00:41:37.277 "is_configured": true, 00:41:37.277 "data_offset": 2048, 00:41:37.277 "data_size": 63488 00:41:37.277 } 00:41:37.277 ] 00:41:37.277 }' 00:41:37.277 23:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:37.277 23:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:37.277 23:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:37.277 23:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:37.277 23:23:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:41:38.214 23:23:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:41:38.214 23:23:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:38.214 23:23:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:38.214 23:23:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:38.214 23:23:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:38.214 23:23:18 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:38.214 23:23:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:38.214 23:23:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.214 23:23:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:38.214 23:23:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:38.214 23:23:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.214 23:23:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:38.214 "name": "raid_bdev1", 00:41:38.214 "uuid": "376dcfcb-887e-492e-bd10-52e85c73370c", 00:41:38.214 "strip_size_kb": 64, 00:41:38.214 "state": "online", 00:41:38.214 "raid_level": "raid5f", 00:41:38.214 "superblock": true, 00:41:38.214 "num_base_bdevs": 4, 00:41:38.214 "num_base_bdevs_discovered": 4, 00:41:38.214 "num_base_bdevs_operational": 4, 00:41:38.214 "process": { 00:41:38.214 "type": "rebuild", 00:41:38.214 "target": "spare", 00:41:38.214 "progress": { 00:41:38.214 "blocks": 107520, 00:41:38.214 "percent": 56 00:41:38.214 } 00:41:38.214 }, 00:41:38.214 "base_bdevs_list": [ 00:41:38.214 { 00:41:38.214 "name": "spare", 00:41:38.214 "uuid": "c009e56c-cd8f-5940-8693-647932f47248", 00:41:38.214 "is_configured": true, 00:41:38.214 "data_offset": 2048, 00:41:38.214 "data_size": 63488 00:41:38.214 }, 00:41:38.214 { 00:41:38.214 "name": "BaseBdev2", 00:41:38.214 "uuid": "8a274a42-f820-524c-8907-fd83117bd9fa", 00:41:38.214 "is_configured": true, 00:41:38.214 "data_offset": 2048, 00:41:38.214 "data_size": 63488 00:41:38.214 }, 00:41:38.214 { 00:41:38.214 "name": "BaseBdev3", 00:41:38.214 "uuid": "5626c641-13a7-5473-bd3a-5c5709fb674f", 00:41:38.214 "is_configured": true, 00:41:38.214 "data_offset": 2048, 00:41:38.214 "data_size": 63488 00:41:38.214 }, 00:41:38.214 
{ 00:41:38.214 "name": "BaseBdev4", 00:41:38.214 "uuid": "cd127e4b-64f2-53f7-becd-9adb53efe756", 00:41:38.214 "is_configured": true, 00:41:38.214 "data_offset": 2048, 00:41:38.214 "data_size": 63488 00:41:38.214 } 00:41:38.214 ] 00:41:38.214 }' 00:41:38.214 23:23:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:38.498 23:23:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:38.498 23:23:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:38.498 23:23:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:38.498 23:23:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:41:39.449 23:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:41:39.449 23:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:39.449 23:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:39.449 23:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:39.449 23:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:39.449 23:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:39.449 23:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:39.449 23:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:39.449 23:23:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.449 23:23:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:39.449 23:23:19 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.449 23:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:39.449 "name": "raid_bdev1", 00:41:39.449 "uuid": "376dcfcb-887e-492e-bd10-52e85c73370c", 00:41:39.449 "strip_size_kb": 64, 00:41:39.449 "state": "online", 00:41:39.449 "raid_level": "raid5f", 00:41:39.449 "superblock": true, 00:41:39.449 "num_base_bdevs": 4, 00:41:39.449 "num_base_bdevs_discovered": 4, 00:41:39.449 "num_base_bdevs_operational": 4, 00:41:39.449 "process": { 00:41:39.449 "type": "rebuild", 00:41:39.449 "target": "spare", 00:41:39.449 "progress": { 00:41:39.449 "blocks": 128640, 00:41:39.449 "percent": 67 00:41:39.449 } 00:41:39.449 }, 00:41:39.449 "base_bdevs_list": [ 00:41:39.449 { 00:41:39.449 "name": "spare", 00:41:39.449 "uuid": "c009e56c-cd8f-5940-8693-647932f47248", 00:41:39.449 "is_configured": true, 00:41:39.449 "data_offset": 2048, 00:41:39.449 "data_size": 63488 00:41:39.449 }, 00:41:39.449 { 00:41:39.449 "name": "BaseBdev2", 00:41:39.449 "uuid": "8a274a42-f820-524c-8907-fd83117bd9fa", 00:41:39.449 "is_configured": true, 00:41:39.449 "data_offset": 2048, 00:41:39.449 "data_size": 63488 00:41:39.449 }, 00:41:39.449 { 00:41:39.449 "name": "BaseBdev3", 00:41:39.449 "uuid": "5626c641-13a7-5473-bd3a-5c5709fb674f", 00:41:39.449 "is_configured": true, 00:41:39.449 "data_offset": 2048, 00:41:39.449 "data_size": 63488 00:41:39.449 }, 00:41:39.449 { 00:41:39.449 "name": "BaseBdev4", 00:41:39.449 "uuid": "cd127e4b-64f2-53f7-becd-9adb53efe756", 00:41:39.449 "is_configured": true, 00:41:39.449 "data_offset": 2048, 00:41:39.449 "data_size": 63488 00:41:39.449 } 00:41:39.449 ] 00:41:39.449 }' 00:41:39.449 23:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:39.449 23:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:39.449 23:23:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:41:39.449 23:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:39.449 23:23:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:41:40.826 23:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:41:40.826 23:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:40.826 23:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:40.826 23:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:40.826 23:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:40.826 23:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:40.826 23:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:40.826 23:23:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:40.826 23:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:40.826 23:23:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:40.826 23:23:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:40.826 23:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:40.826 "name": "raid_bdev1", 00:41:40.826 "uuid": "376dcfcb-887e-492e-bd10-52e85c73370c", 00:41:40.826 "strip_size_kb": 64, 00:41:40.826 "state": "online", 00:41:40.826 "raid_level": "raid5f", 00:41:40.826 "superblock": true, 00:41:40.826 "num_base_bdevs": 4, 00:41:40.826 "num_base_bdevs_discovered": 4, 00:41:40.826 "num_base_bdevs_operational": 4, 00:41:40.826 "process": { 00:41:40.826 "type": 
"rebuild", 00:41:40.826 "target": "spare", 00:41:40.826 "progress": { 00:41:40.826 "blocks": 149760, 00:41:40.826 "percent": 78 00:41:40.826 } 00:41:40.826 }, 00:41:40.826 "base_bdevs_list": [ 00:41:40.826 { 00:41:40.826 "name": "spare", 00:41:40.826 "uuid": "c009e56c-cd8f-5940-8693-647932f47248", 00:41:40.826 "is_configured": true, 00:41:40.826 "data_offset": 2048, 00:41:40.826 "data_size": 63488 00:41:40.826 }, 00:41:40.826 { 00:41:40.826 "name": "BaseBdev2", 00:41:40.826 "uuid": "8a274a42-f820-524c-8907-fd83117bd9fa", 00:41:40.826 "is_configured": true, 00:41:40.826 "data_offset": 2048, 00:41:40.826 "data_size": 63488 00:41:40.826 }, 00:41:40.826 { 00:41:40.826 "name": "BaseBdev3", 00:41:40.827 "uuid": "5626c641-13a7-5473-bd3a-5c5709fb674f", 00:41:40.827 "is_configured": true, 00:41:40.827 "data_offset": 2048, 00:41:40.827 "data_size": 63488 00:41:40.827 }, 00:41:40.827 { 00:41:40.827 "name": "BaseBdev4", 00:41:40.827 "uuid": "cd127e4b-64f2-53f7-becd-9adb53efe756", 00:41:40.827 "is_configured": true, 00:41:40.827 "data_offset": 2048, 00:41:40.827 "data_size": 63488 00:41:40.827 } 00:41:40.827 ] 00:41:40.827 }' 00:41:40.827 23:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:40.827 23:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:40.827 23:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:40.827 23:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:40.827 23:23:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:41:41.762 23:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:41:41.762 23:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:41.762 23:23:22 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:41.762 23:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:41.762 23:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:41.762 23:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:41.762 23:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:41.762 23:23:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:41.762 23:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:41.762 23:23:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:41.762 23:23:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:41.762 23:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:41.762 "name": "raid_bdev1", 00:41:41.762 "uuid": "376dcfcb-887e-492e-bd10-52e85c73370c", 00:41:41.762 "strip_size_kb": 64, 00:41:41.762 "state": "online", 00:41:41.762 "raid_level": "raid5f", 00:41:41.762 "superblock": true, 00:41:41.762 "num_base_bdevs": 4, 00:41:41.762 "num_base_bdevs_discovered": 4, 00:41:41.762 "num_base_bdevs_operational": 4, 00:41:41.762 "process": { 00:41:41.762 "type": "rebuild", 00:41:41.762 "target": "spare", 00:41:41.762 "progress": { 00:41:41.762 "blocks": 172800, 00:41:41.762 "percent": 90 00:41:41.762 } 00:41:41.762 }, 00:41:41.762 "base_bdevs_list": [ 00:41:41.762 { 00:41:41.762 "name": "spare", 00:41:41.763 "uuid": "c009e56c-cd8f-5940-8693-647932f47248", 00:41:41.763 "is_configured": true, 00:41:41.763 "data_offset": 2048, 00:41:41.763 "data_size": 63488 00:41:41.763 }, 00:41:41.763 { 00:41:41.763 "name": "BaseBdev2", 00:41:41.763 "uuid": "8a274a42-f820-524c-8907-fd83117bd9fa", 00:41:41.763 
"is_configured": true, 00:41:41.763 "data_offset": 2048, 00:41:41.763 "data_size": 63488 00:41:41.763 }, 00:41:41.763 { 00:41:41.763 "name": "BaseBdev3", 00:41:41.763 "uuid": "5626c641-13a7-5473-bd3a-5c5709fb674f", 00:41:41.763 "is_configured": true, 00:41:41.763 "data_offset": 2048, 00:41:41.763 "data_size": 63488 00:41:41.763 }, 00:41:41.763 { 00:41:41.763 "name": "BaseBdev4", 00:41:41.763 "uuid": "cd127e4b-64f2-53f7-becd-9adb53efe756", 00:41:41.763 "is_configured": true, 00:41:41.763 "data_offset": 2048, 00:41:41.763 "data_size": 63488 00:41:41.763 } 00:41:41.763 ] 00:41:41.763 }' 00:41:41.763 23:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:41.763 23:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:41.763 23:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:41.763 23:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:41.763 23:23:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:41:42.706 [2024-12-09 23:23:23.178517] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:41:42.706 [2024-12-09 23:23:23.178619] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:41:42.706 [2024-12-09 23:23:23.178792] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:42.706 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:41:42.706 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:42.706 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:42.706 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:41:42.706 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:42.706 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:42.706 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:42.706 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:42.706 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:42.706 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:42.965 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:42.965 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:42.965 "name": "raid_bdev1", 00:41:42.965 "uuid": "376dcfcb-887e-492e-bd10-52e85c73370c", 00:41:42.965 "strip_size_kb": 64, 00:41:42.965 "state": "online", 00:41:42.965 "raid_level": "raid5f", 00:41:42.965 "superblock": true, 00:41:42.965 "num_base_bdevs": 4, 00:41:42.965 "num_base_bdevs_discovered": 4, 00:41:42.965 "num_base_bdevs_operational": 4, 00:41:42.965 "base_bdevs_list": [ 00:41:42.965 { 00:41:42.965 "name": "spare", 00:41:42.965 "uuid": "c009e56c-cd8f-5940-8693-647932f47248", 00:41:42.965 "is_configured": true, 00:41:42.965 "data_offset": 2048, 00:41:42.965 "data_size": 63488 00:41:42.965 }, 00:41:42.965 { 00:41:42.965 "name": "BaseBdev2", 00:41:42.965 "uuid": "8a274a42-f820-524c-8907-fd83117bd9fa", 00:41:42.965 "is_configured": true, 00:41:42.965 "data_offset": 2048, 00:41:42.965 "data_size": 63488 00:41:42.965 }, 00:41:42.965 { 00:41:42.965 "name": "BaseBdev3", 00:41:42.965 "uuid": "5626c641-13a7-5473-bd3a-5c5709fb674f", 00:41:42.965 "is_configured": true, 00:41:42.965 "data_offset": 2048, 00:41:42.965 "data_size": 63488 00:41:42.965 }, 00:41:42.965 { 00:41:42.965 "name": 
"BaseBdev4", 00:41:42.965 "uuid": "cd127e4b-64f2-53f7-becd-9adb53efe756", 00:41:42.965 "is_configured": true, 00:41:42.965 "data_offset": 2048, 00:41:42.965 "data_size": 63488 00:41:42.965 } 00:41:42.965 ] 00:41:42.965 }' 00:41:42.965 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:42.965 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:41:42.965 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:42.965 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:41:42.965 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:41:42.965 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:42.965 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:42.965 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:41:42.965 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:41:42.965 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:42.965 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:42.965 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:42.965 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:42.965 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:42.965 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:42.965 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:41:42.965 "name": "raid_bdev1", 00:41:42.965 "uuid": "376dcfcb-887e-492e-bd10-52e85c73370c", 00:41:42.965 "strip_size_kb": 64, 00:41:42.965 "state": "online", 00:41:42.965 "raid_level": "raid5f", 00:41:42.965 "superblock": true, 00:41:42.965 "num_base_bdevs": 4, 00:41:42.965 "num_base_bdevs_discovered": 4, 00:41:42.965 "num_base_bdevs_operational": 4, 00:41:42.965 "base_bdevs_list": [ 00:41:42.965 { 00:41:42.965 "name": "spare", 00:41:42.965 "uuid": "c009e56c-cd8f-5940-8693-647932f47248", 00:41:42.965 "is_configured": true, 00:41:42.965 "data_offset": 2048, 00:41:42.965 "data_size": 63488 00:41:42.965 }, 00:41:42.965 { 00:41:42.965 "name": "BaseBdev2", 00:41:42.966 "uuid": "8a274a42-f820-524c-8907-fd83117bd9fa", 00:41:42.966 "is_configured": true, 00:41:42.966 "data_offset": 2048, 00:41:42.966 "data_size": 63488 00:41:42.966 }, 00:41:42.966 { 00:41:42.966 "name": "BaseBdev3", 00:41:42.966 "uuid": "5626c641-13a7-5473-bd3a-5c5709fb674f", 00:41:42.966 "is_configured": true, 00:41:42.966 "data_offset": 2048, 00:41:42.966 "data_size": 63488 00:41:42.966 }, 00:41:42.966 { 00:41:42.966 "name": "BaseBdev4", 00:41:42.966 "uuid": "cd127e4b-64f2-53f7-becd-9adb53efe756", 00:41:42.966 "is_configured": true, 00:41:42.966 "data_offset": 2048, 00:41:42.966 "data_size": 63488 00:41:42.966 } 00:41:42.966 ] 00:41:42.966 }' 00:41:42.966 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:42.966 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:41:42.966 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:43.224 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:41:43.224 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:41:43.224 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:43.225 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:43.225 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:43.225 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:43.225 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:41:43.225 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:43.225 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:43.225 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:43.225 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:43.225 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:43.225 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:43.225 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:43.225 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:43.225 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:43.225 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:43.225 "name": "raid_bdev1", 00:41:43.225 "uuid": "376dcfcb-887e-492e-bd10-52e85c73370c", 00:41:43.225 "strip_size_kb": 64, 00:41:43.225 "state": "online", 00:41:43.225 "raid_level": "raid5f", 00:41:43.225 "superblock": true, 00:41:43.225 "num_base_bdevs": 4, 00:41:43.225 "num_base_bdevs_discovered": 4, 00:41:43.225 "num_base_bdevs_operational": 4, 00:41:43.225 "base_bdevs_list": [ 00:41:43.225 { 
00:41:43.225 "name": "spare", 00:41:43.225 "uuid": "c009e56c-cd8f-5940-8693-647932f47248", 00:41:43.225 "is_configured": true, 00:41:43.225 "data_offset": 2048, 00:41:43.225 "data_size": 63488 00:41:43.225 }, 00:41:43.225 { 00:41:43.225 "name": "BaseBdev2", 00:41:43.225 "uuid": "8a274a42-f820-524c-8907-fd83117bd9fa", 00:41:43.225 "is_configured": true, 00:41:43.225 "data_offset": 2048, 00:41:43.225 "data_size": 63488 00:41:43.225 }, 00:41:43.225 { 00:41:43.225 "name": "BaseBdev3", 00:41:43.225 "uuid": "5626c641-13a7-5473-bd3a-5c5709fb674f", 00:41:43.225 "is_configured": true, 00:41:43.225 "data_offset": 2048, 00:41:43.225 "data_size": 63488 00:41:43.225 }, 00:41:43.225 { 00:41:43.225 "name": "BaseBdev4", 00:41:43.225 "uuid": "cd127e4b-64f2-53f7-becd-9adb53efe756", 00:41:43.225 "is_configured": true, 00:41:43.225 "data_offset": 2048, 00:41:43.225 "data_size": 63488 00:41:43.225 } 00:41:43.225 ] 00:41:43.225 }' 00:41:43.225 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:43.225 23:23:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:43.485 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:41:43.485 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:43.485 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:43.485 [2024-12-09 23:23:24.094616] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:41:43.485 [2024-12-09 23:23:24.094661] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:41:43.485 [2024-12-09 23:23:24.094755] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:43.485 [2024-12-09 23:23:24.094865] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:41:43.485 [2024-12-09 
23:23:24.094893] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:41:43.485 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:43.485 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:41:43.485 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:43.485 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:43.485 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:43.485 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:43.744 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:41:43.744 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:41:43.744 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:41:43.744 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:41:43.744 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:41:43.744 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:41:43.744 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:41:43.744 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:43.744 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:41:43.744 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:41:43.744 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:41:43.744 23:23:24 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:41:43.744 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:41:43.744 /dev/nbd0 00:41:44.003 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:41:44.003 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:41:44.003 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:41:44.003 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:41:44.003 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:41:44.003 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:41:44.003 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:41:44.003 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:41:44.003 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:41:44.003 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:41:44.003 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:44.003 1+0 records in 00:41:44.003 1+0 records out 00:41:44.003 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394263 s, 10.4 MB/s 00:41:44.003 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:44.003 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:41:44.003 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:44.003 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:41:44.003 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:41:44.003 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:44.003 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:41:44.003 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:41:44.262 /dev/nbd1 00:41:44.262 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:41:44.262 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:41:44.262 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:41:44.262 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:41:44.262 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:41:44.262 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:41:44.262 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:41:44.262 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:41:44.262 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:41:44.262 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:41:44.262 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:44.262 1+0 records in 00:41:44.262 
1+0 records out 00:41:44.262 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000545639 s, 7.5 MB/s 00:41:44.262 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:44.262 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:41:44.262 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:44.262 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:41:44.262 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:41:44.262 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:44.262 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:41:44.262 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:41:44.262 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:41:44.262 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:41:44.262 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:44.262 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:41:44.262 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:41:44.262 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:44.262 23:23:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:41:44.522 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:41:44.522 
23:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:41:44.522 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:41:44.522 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:44.522 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:44.522 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:41:44.522 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:41:44.522 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:41:44.522 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:44.522 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:41:44.781 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:41:44.781 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:41:44.781 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:41:44.781 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:44.781 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:44.781 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:41:44.781 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:41:44.781 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:41:44.781 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:41:44.781 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd 
bdev_passthru_delete spare 00:41:44.781 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.781 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:44.781 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.781 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:41:44.781 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.781 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:44.781 [2024-12-09 23:23:25.405110] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:41:44.781 [2024-12-09 23:23:25.405179] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:44.781 [2024-12-09 23:23:25.405210] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:41:44.781 [2024-12-09 23:23:25.405225] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:44.781 [2024-12-09 23:23:25.408233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:44.781 [2024-12-09 23:23:25.408283] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:41:44.781 [2024-12-09 23:23:25.408410] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:41:44.781 [2024-12-09 23:23:25.408476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:44.781 [2024-12-09 23:23:25.408640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:41:44.781 [2024-12-09 23:23:25.408745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:41:44.781 spare 00:41:44.781 [2024-12-09 23:23:25.408837] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:41:44.781 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.781 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:41:44.781 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.781 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:45.039 [2024-12-09 23:23:25.508800] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:41:45.039 [2024-12-09 23:23:25.509063] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:41:45.039 [2024-12-09 23:23:25.509552] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:41:45.039 [2024-12-09 23:23:25.518403] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:41:45.039 [2024-12-09 23:23:25.518429] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:41:45.039 [2024-12-09 23:23:25.518704] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:45.039 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.039 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:41:45.039 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:45.039 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:45.039 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:45.039 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:45.039 23:23:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:41:45.039 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:45.039 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:45.039 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:45.039 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:45.039 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:45.039 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:45.039 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.039 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:45.039 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.039 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:45.039 "name": "raid_bdev1", 00:41:45.039 "uuid": "376dcfcb-887e-492e-bd10-52e85c73370c", 00:41:45.039 "strip_size_kb": 64, 00:41:45.039 "state": "online", 00:41:45.039 "raid_level": "raid5f", 00:41:45.039 "superblock": true, 00:41:45.039 "num_base_bdevs": 4, 00:41:45.039 "num_base_bdevs_discovered": 4, 00:41:45.039 "num_base_bdevs_operational": 4, 00:41:45.039 "base_bdevs_list": [ 00:41:45.039 { 00:41:45.039 "name": "spare", 00:41:45.039 "uuid": "c009e56c-cd8f-5940-8693-647932f47248", 00:41:45.039 "is_configured": true, 00:41:45.039 "data_offset": 2048, 00:41:45.039 "data_size": 63488 00:41:45.039 }, 00:41:45.039 { 00:41:45.039 "name": "BaseBdev2", 00:41:45.039 "uuid": "8a274a42-f820-524c-8907-fd83117bd9fa", 00:41:45.039 "is_configured": true, 00:41:45.039 "data_offset": 2048, 00:41:45.039 
"data_size": 63488 00:41:45.039 }, 00:41:45.039 { 00:41:45.039 "name": "BaseBdev3", 00:41:45.039 "uuid": "5626c641-13a7-5473-bd3a-5c5709fb674f", 00:41:45.039 "is_configured": true, 00:41:45.039 "data_offset": 2048, 00:41:45.039 "data_size": 63488 00:41:45.039 }, 00:41:45.039 { 00:41:45.039 "name": "BaseBdev4", 00:41:45.039 "uuid": "cd127e4b-64f2-53f7-becd-9adb53efe756", 00:41:45.039 "is_configured": true, 00:41:45.039 "data_offset": 2048, 00:41:45.039 "data_size": 63488 00:41:45.039 } 00:41:45.039 ] 00:41:45.039 }' 00:41:45.039 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:45.039 23:23:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:45.607 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:45.607 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:45.607 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:41:45.607 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:41:45.607 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:45.607 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:45.607 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:45.607 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.607 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:45.607 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.607 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:45.607 "name": "raid_bdev1", 00:41:45.607 
"uuid": "376dcfcb-887e-492e-bd10-52e85c73370c", 00:41:45.607 "strip_size_kb": 64, 00:41:45.607 "state": "online", 00:41:45.607 "raid_level": "raid5f", 00:41:45.607 "superblock": true, 00:41:45.607 "num_base_bdevs": 4, 00:41:45.607 "num_base_bdevs_discovered": 4, 00:41:45.607 "num_base_bdevs_operational": 4, 00:41:45.607 "base_bdevs_list": [ 00:41:45.607 { 00:41:45.607 "name": "spare", 00:41:45.607 "uuid": "c009e56c-cd8f-5940-8693-647932f47248", 00:41:45.607 "is_configured": true, 00:41:45.607 "data_offset": 2048, 00:41:45.607 "data_size": 63488 00:41:45.607 }, 00:41:45.607 { 00:41:45.607 "name": "BaseBdev2", 00:41:45.607 "uuid": "8a274a42-f820-524c-8907-fd83117bd9fa", 00:41:45.607 "is_configured": true, 00:41:45.608 "data_offset": 2048, 00:41:45.608 "data_size": 63488 00:41:45.608 }, 00:41:45.608 { 00:41:45.608 "name": "BaseBdev3", 00:41:45.608 "uuid": "5626c641-13a7-5473-bd3a-5c5709fb674f", 00:41:45.608 "is_configured": true, 00:41:45.608 "data_offset": 2048, 00:41:45.608 "data_size": 63488 00:41:45.608 }, 00:41:45.608 { 00:41:45.608 "name": "BaseBdev4", 00:41:45.608 "uuid": "cd127e4b-64f2-53f7-becd-9adb53efe756", 00:41:45.608 "is_configured": true, 00:41:45.608 "data_offset": 2048, 00:41:45.608 "data_size": 63488 00:41:45.608 } 00:41:45.608 ] 00:41:45.608 }' 00:41:45.608 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:45.608 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:41:45.608 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:45.608 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:41:45.608 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:45.608 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:41:45.608 23:23:26 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.608 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:45.608 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.608 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:41:45.608 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:41:45.608 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.608 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:45.608 [2024-12-09 23:23:26.200039] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:45.608 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.608 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:41:45.608 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:45.608 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:45.608 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:45.608 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:45.608 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:41:45.608 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:45.608 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:45.608 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:45.608 23:23:26 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:45.608 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:45.608 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:45.608 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.608 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:45.608 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.867 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:45.867 "name": "raid_bdev1", 00:41:45.867 "uuid": "376dcfcb-887e-492e-bd10-52e85c73370c", 00:41:45.867 "strip_size_kb": 64, 00:41:45.867 "state": "online", 00:41:45.867 "raid_level": "raid5f", 00:41:45.867 "superblock": true, 00:41:45.867 "num_base_bdevs": 4, 00:41:45.867 "num_base_bdevs_discovered": 3, 00:41:45.867 "num_base_bdevs_operational": 3, 00:41:45.867 "base_bdevs_list": [ 00:41:45.867 { 00:41:45.867 "name": null, 00:41:45.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:45.867 "is_configured": false, 00:41:45.867 "data_offset": 0, 00:41:45.867 "data_size": 63488 00:41:45.867 }, 00:41:45.867 { 00:41:45.867 "name": "BaseBdev2", 00:41:45.867 "uuid": "8a274a42-f820-524c-8907-fd83117bd9fa", 00:41:45.867 "is_configured": true, 00:41:45.867 "data_offset": 2048, 00:41:45.867 "data_size": 63488 00:41:45.867 }, 00:41:45.867 { 00:41:45.867 "name": "BaseBdev3", 00:41:45.867 "uuid": "5626c641-13a7-5473-bd3a-5c5709fb674f", 00:41:45.867 "is_configured": true, 00:41:45.867 "data_offset": 2048, 00:41:45.867 "data_size": 63488 00:41:45.867 }, 00:41:45.867 { 00:41:45.867 "name": "BaseBdev4", 00:41:45.867 "uuid": "cd127e4b-64f2-53f7-becd-9adb53efe756", 00:41:45.867 "is_configured": true, 00:41:45.867 "data_offset": 2048, 00:41:45.867 
"data_size": 63488 00:41:45.867 } 00:41:45.867 ] 00:41:45.867 }' 00:41:45.867 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:45.867 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:46.126 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:41:46.126 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.126 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:46.126 [2024-12-09 23:23:26.675593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:46.126 [2024-12-09 23:23:26.675812] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:41:46.126 [2024-12-09 23:23:26.675839] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:41:46.126 [2024-12-09 23:23:26.675889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:46.126 [2024-12-09 23:23:26.693776] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:41:46.126 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.126 23:23:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:41:46.126 [2024-12-09 23:23:26.704764] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:41:47.507 23:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:47.507 23:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:47.507 23:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:47.507 23:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:47.507 23:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:47.507 23:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:47.508 23:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:47.508 23:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.508 23:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:47.508 23:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.508 23:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:47.508 "name": "raid_bdev1", 00:41:47.508 "uuid": "376dcfcb-887e-492e-bd10-52e85c73370c", 00:41:47.508 "strip_size_kb": 64, 00:41:47.508 "state": "online", 00:41:47.508 
"raid_level": "raid5f", 00:41:47.508 "superblock": true, 00:41:47.508 "num_base_bdevs": 4, 00:41:47.508 "num_base_bdevs_discovered": 4, 00:41:47.508 "num_base_bdevs_operational": 4, 00:41:47.508 "process": { 00:41:47.508 "type": "rebuild", 00:41:47.508 "target": "spare", 00:41:47.508 "progress": { 00:41:47.508 "blocks": 17280, 00:41:47.508 "percent": 9 00:41:47.508 } 00:41:47.508 }, 00:41:47.508 "base_bdevs_list": [ 00:41:47.508 { 00:41:47.508 "name": "spare", 00:41:47.508 "uuid": "c009e56c-cd8f-5940-8693-647932f47248", 00:41:47.508 "is_configured": true, 00:41:47.508 "data_offset": 2048, 00:41:47.508 "data_size": 63488 00:41:47.508 }, 00:41:47.508 { 00:41:47.508 "name": "BaseBdev2", 00:41:47.508 "uuid": "8a274a42-f820-524c-8907-fd83117bd9fa", 00:41:47.508 "is_configured": true, 00:41:47.508 "data_offset": 2048, 00:41:47.508 "data_size": 63488 00:41:47.508 }, 00:41:47.508 { 00:41:47.508 "name": "BaseBdev3", 00:41:47.508 "uuid": "5626c641-13a7-5473-bd3a-5c5709fb674f", 00:41:47.508 "is_configured": true, 00:41:47.508 "data_offset": 2048, 00:41:47.508 "data_size": 63488 00:41:47.508 }, 00:41:47.508 { 00:41:47.508 "name": "BaseBdev4", 00:41:47.508 "uuid": "cd127e4b-64f2-53f7-becd-9adb53efe756", 00:41:47.508 "is_configured": true, 00:41:47.508 "data_offset": 2048, 00:41:47.508 "data_size": 63488 00:41:47.508 } 00:41:47.508 ] 00:41:47.508 }' 00:41:47.508 23:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:47.508 23:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:47.508 23:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:47.508 23:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:47.508 23:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:41:47.508 23:23:27 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.508 23:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:47.508 [2024-12-09 23:23:27.836023] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:47.508 [2024-12-09 23:23:27.913976] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:41:47.508 [2024-12-09 23:23:27.914064] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:47.508 [2024-12-09 23:23:27.914088] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:47.508 [2024-12-09 23:23:27.914102] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:41:47.508 23:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.508 23:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:41:47.508 23:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:47.508 23:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:47.508 23:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:47.508 23:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:47.508 23:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:41:47.508 23:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:47.508 23:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:47.508 23:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:47.508 23:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:41:47.508 23:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:47.508 23:23:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:47.508 23:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.508 23:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:47.508 23:23:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.508 23:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:47.508 "name": "raid_bdev1", 00:41:47.508 "uuid": "376dcfcb-887e-492e-bd10-52e85c73370c", 00:41:47.508 "strip_size_kb": 64, 00:41:47.508 "state": "online", 00:41:47.508 "raid_level": "raid5f", 00:41:47.508 "superblock": true, 00:41:47.508 "num_base_bdevs": 4, 00:41:47.508 "num_base_bdevs_discovered": 3, 00:41:47.508 "num_base_bdevs_operational": 3, 00:41:47.508 "base_bdevs_list": [ 00:41:47.508 { 00:41:47.508 "name": null, 00:41:47.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:47.508 "is_configured": false, 00:41:47.508 "data_offset": 0, 00:41:47.508 "data_size": 63488 00:41:47.508 }, 00:41:47.508 { 00:41:47.508 "name": "BaseBdev2", 00:41:47.508 "uuid": "8a274a42-f820-524c-8907-fd83117bd9fa", 00:41:47.508 "is_configured": true, 00:41:47.508 "data_offset": 2048, 00:41:47.508 "data_size": 63488 00:41:47.508 }, 00:41:47.508 { 00:41:47.508 "name": "BaseBdev3", 00:41:47.508 "uuid": "5626c641-13a7-5473-bd3a-5c5709fb674f", 00:41:47.508 "is_configured": true, 00:41:47.508 "data_offset": 2048, 00:41:47.508 "data_size": 63488 00:41:47.508 }, 00:41:47.508 { 00:41:47.508 "name": "BaseBdev4", 00:41:47.508 "uuid": "cd127e4b-64f2-53f7-becd-9adb53efe756", 00:41:47.508 "is_configured": true, 00:41:47.508 "data_offset": 2048, 00:41:47.508 "data_size": 63488 00:41:47.508 } 00:41:47.508 ] 00:41:47.508 }' 
00:41:47.508 23:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:47.508 23:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:47.777 23:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:41:47.777 23:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.777 23:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:47.777 [2024-12-09 23:23:28.405561] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:41:47.777 [2024-12-09 23:23:28.405786] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:47.777 [2024-12-09 23:23:28.405861] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:41:47.777 [2024-12-09 23:23:28.405977] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:47.777 [2024-12-09 23:23:28.406599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:47.777 [2024-12-09 23:23:28.406631] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:41:47.777 [2024-12-09 23:23:28.406742] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:41:47.777 [2024-12-09 23:23:28.406763] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:41:47.777 [2024-12-09 23:23:28.406778] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:41:47.777 [2024-12-09 23:23:28.406812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:48.034 [2024-12-09 23:23:28.424860] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:41:48.034 spare 00:41:48.034 23:23:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:48.034 23:23:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:41:48.034 [2024-12-09 23:23:28.435623] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:41:48.966 23:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:48.966 23:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:48.966 23:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:48.966 23:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:48.966 23:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:48.966 23:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:48.966 23:23:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:48.966 23:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:48.966 23:23:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:48.966 23:23:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:48.966 23:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:48.966 "name": "raid_bdev1", 00:41:48.966 "uuid": "376dcfcb-887e-492e-bd10-52e85c73370c", 00:41:48.966 "strip_size_kb": 64, 00:41:48.966 "state": 
"online", 00:41:48.966 "raid_level": "raid5f", 00:41:48.966 "superblock": true, 00:41:48.966 "num_base_bdevs": 4, 00:41:48.966 "num_base_bdevs_discovered": 4, 00:41:48.966 "num_base_bdevs_operational": 4, 00:41:48.966 "process": { 00:41:48.966 "type": "rebuild", 00:41:48.966 "target": "spare", 00:41:48.966 "progress": { 00:41:48.966 "blocks": 19200, 00:41:48.966 "percent": 10 00:41:48.966 } 00:41:48.966 }, 00:41:48.966 "base_bdevs_list": [ 00:41:48.966 { 00:41:48.966 "name": "spare", 00:41:48.966 "uuid": "c009e56c-cd8f-5940-8693-647932f47248", 00:41:48.966 "is_configured": true, 00:41:48.966 "data_offset": 2048, 00:41:48.966 "data_size": 63488 00:41:48.966 }, 00:41:48.966 { 00:41:48.966 "name": "BaseBdev2", 00:41:48.966 "uuid": "8a274a42-f820-524c-8907-fd83117bd9fa", 00:41:48.966 "is_configured": true, 00:41:48.966 "data_offset": 2048, 00:41:48.966 "data_size": 63488 00:41:48.966 }, 00:41:48.966 { 00:41:48.966 "name": "BaseBdev3", 00:41:48.966 "uuid": "5626c641-13a7-5473-bd3a-5c5709fb674f", 00:41:48.966 "is_configured": true, 00:41:48.966 "data_offset": 2048, 00:41:48.966 "data_size": 63488 00:41:48.966 }, 00:41:48.966 { 00:41:48.966 "name": "BaseBdev4", 00:41:48.966 "uuid": "cd127e4b-64f2-53f7-becd-9adb53efe756", 00:41:48.966 "is_configured": true, 00:41:48.966 "data_offset": 2048, 00:41:48.966 "data_size": 63488 00:41:48.966 } 00:41:48.966 ] 00:41:48.966 }' 00:41:48.966 23:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:48.966 23:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:48.966 23:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:48.966 23:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:48.966 23:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:41:48.966 23:23:29 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:48.966 23:23:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:48.966 [2024-12-09 23:23:29.587859] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:49.224 [2024-12-09 23:23:29.644745] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:41:49.224 [2024-12-09 23:23:29.644838] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:49.224 [2024-12-09 23:23:29.644865] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:49.224 [2024-12-09 23:23:29.644876] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:41:49.224 23:23:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.224 23:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:41:49.224 23:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:49.224 23:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:49.224 23:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:49.224 23:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:49.224 23:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:41:49.224 23:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:49.224 23:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:49.224 23:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:49.224 23:23:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:49.224 23:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:49.224 23:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:49.224 23:23:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.224 23:23:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:49.224 23:23:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.224 23:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:49.224 "name": "raid_bdev1", 00:41:49.224 "uuid": "376dcfcb-887e-492e-bd10-52e85c73370c", 00:41:49.224 "strip_size_kb": 64, 00:41:49.224 "state": "online", 00:41:49.224 "raid_level": "raid5f", 00:41:49.224 "superblock": true, 00:41:49.224 "num_base_bdevs": 4, 00:41:49.224 "num_base_bdevs_discovered": 3, 00:41:49.224 "num_base_bdevs_operational": 3, 00:41:49.224 "base_bdevs_list": [ 00:41:49.224 { 00:41:49.224 "name": null, 00:41:49.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:49.224 "is_configured": false, 00:41:49.224 "data_offset": 0, 00:41:49.224 "data_size": 63488 00:41:49.224 }, 00:41:49.224 { 00:41:49.224 "name": "BaseBdev2", 00:41:49.224 "uuid": "8a274a42-f820-524c-8907-fd83117bd9fa", 00:41:49.224 "is_configured": true, 00:41:49.224 "data_offset": 2048, 00:41:49.224 "data_size": 63488 00:41:49.224 }, 00:41:49.224 { 00:41:49.224 "name": "BaseBdev3", 00:41:49.224 "uuid": "5626c641-13a7-5473-bd3a-5c5709fb674f", 00:41:49.224 "is_configured": true, 00:41:49.224 "data_offset": 2048, 00:41:49.224 "data_size": 63488 00:41:49.224 }, 00:41:49.224 { 00:41:49.224 "name": "BaseBdev4", 00:41:49.224 "uuid": "cd127e4b-64f2-53f7-becd-9adb53efe756", 00:41:49.224 "is_configured": true, 00:41:49.224 "data_offset": 2048, 00:41:49.224 
"data_size": 63488 00:41:49.224 } 00:41:49.224 ] 00:41:49.224 }' 00:41:49.224 23:23:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:49.224 23:23:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:49.791 23:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:49.791 23:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:49.791 23:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:41:49.791 23:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:41:49.791 23:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:49.791 23:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:49.791 23:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:49.791 23:23:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.791 23:23:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:49.791 23:23:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.791 23:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:49.791 "name": "raid_bdev1", 00:41:49.791 "uuid": "376dcfcb-887e-492e-bd10-52e85c73370c", 00:41:49.791 "strip_size_kb": 64, 00:41:49.791 "state": "online", 00:41:49.791 "raid_level": "raid5f", 00:41:49.791 "superblock": true, 00:41:49.791 "num_base_bdevs": 4, 00:41:49.791 "num_base_bdevs_discovered": 3, 00:41:49.791 "num_base_bdevs_operational": 3, 00:41:49.791 "base_bdevs_list": [ 00:41:49.791 { 00:41:49.791 "name": null, 00:41:49.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:49.791 
"is_configured": false, 00:41:49.791 "data_offset": 0, 00:41:49.791 "data_size": 63488 00:41:49.791 }, 00:41:49.791 { 00:41:49.791 "name": "BaseBdev2", 00:41:49.791 "uuid": "8a274a42-f820-524c-8907-fd83117bd9fa", 00:41:49.791 "is_configured": true, 00:41:49.791 "data_offset": 2048, 00:41:49.791 "data_size": 63488 00:41:49.791 }, 00:41:49.791 { 00:41:49.791 "name": "BaseBdev3", 00:41:49.791 "uuid": "5626c641-13a7-5473-bd3a-5c5709fb674f", 00:41:49.791 "is_configured": true, 00:41:49.791 "data_offset": 2048, 00:41:49.791 "data_size": 63488 00:41:49.791 }, 00:41:49.791 { 00:41:49.791 "name": "BaseBdev4", 00:41:49.791 "uuid": "cd127e4b-64f2-53f7-becd-9adb53efe756", 00:41:49.791 "is_configured": true, 00:41:49.791 "data_offset": 2048, 00:41:49.791 "data_size": 63488 00:41:49.791 } 00:41:49.791 ] 00:41:49.791 }' 00:41:49.791 23:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:49.791 23:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:41:49.791 23:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:49.791 23:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:41:49.791 23:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:41:49.791 23:23:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.791 23:23:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:49.791 23:23:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.791 23:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:41:49.791 23:23:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.791 23:23:30 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:49.791 [2024-12-09 23:23:30.299649] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:41:49.791 [2024-12-09 23:23:30.299851] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:49.791 [2024-12-09 23:23:30.299970] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:41:49.791 [2024-12-09 23:23:30.300060] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:49.791 [2024-12-09 23:23:30.300635] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:49.791 [2024-12-09 23:23:30.300667] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:41:49.791 [2024-12-09 23:23:30.300767] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:41:49.791 [2024-12-09 23:23:30.300787] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:41:49.791 [2024-12-09 23:23:30.300806] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:41:49.791 [2024-12-09 23:23:30.300819] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:41:49.791 BaseBdev1 00:41:49.791 23:23:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.791 23:23:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:41:50.728 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:41:50.728 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:50.728 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:41:50.728 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:50.728 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:50.728 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:41:50.728 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:50.728 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:50.728 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:50.728 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:50.728 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:50.728 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:50.728 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:50.728 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:50.728 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:50.728 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:50.728 "name": "raid_bdev1", 00:41:50.728 "uuid": "376dcfcb-887e-492e-bd10-52e85c73370c", 00:41:50.728 "strip_size_kb": 64, 00:41:50.728 "state": "online", 00:41:50.728 "raid_level": "raid5f", 00:41:50.728 "superblock": true, 00:41:50.728 "num_base_bdevs": 4, 00:41:50.728 "num_base_bdevs_discovered": 3, 00:41:50.728 "num_base_bdevs_operational": 3, 00:41:50.728 "base_bdevs_list": [ 00:41:50.728 { 00:41:50.728 "name": null, 00:41:50.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:50.728 "is_configured": false, 00:41:50.728 
"data_offset": 0, 00:41:50.728 "data_size": 63488 00:41:50.728 }, 00:41:50.728 { 00:41:50.728 "name": "BaseBdev2", 00:41:50.728 "uuid": "8a274a42-f820-524c-8907-fd83117bd9fa", 00:41:50.728 "is_configured": true, 00:41:50.728 "data_offset": 2048, 00:41:50.728 "data_size": 63488 00:41:50.728 }, 00:41:50.728 { 00:41:50.728 "name": "BaseBdev3", 00:41:50.728 "uuid": "5626c641-13a7-5473-bd3a-5c5709fb674f", 00:41:50.728 "is_configured": true, 00:41:50.728 "data_offset": 2048, 00:41:50.728 "data_size": 63488 00:41:50.728 }, 00:41:50.728 { 00:41:50.728 "name": "BaseBdev4", 00:41:50.728 "uuid": "cd127e4b-64f2-53f7-becd-9adb53efe756", 00:41:50.728 "is_configured": true, 00:41:50.728 "data_offset": 2048, 00:41:50.728 "data_size": 63488 00:41:50.728 } 00:41:50.728 ] 00:41:50.728 }' 00:41:50.728 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:50.728 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:51.295 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:51.295 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:51.295 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:41:51.295 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:41:51.296 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:51.296 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:51.296 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:51.296 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:51.296 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:41:51.296 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:51.296 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:51.296 "name": "raid_bdev1", 00:41:51.296 "uuid": "376dcfcb-887e-492e-bd10-52e85c73370c", 00:41:51.296 "strip_size_kb": 64, 00:41:51.296 "state": "online", 00:41:51.296 "raid_level": "raid5f", 00:41:51.296 "superblock": true, 00:41:51.296 "num_base_bdevs": 4, 00:41:51.296 "num_base_bdevs_discovered": 3, 00:41:51.296 "num_base_bdevs_operational": 3, 00:41:51.296 "base_bdevs_list": [ 00:41:51.296 { 00:41:51.296 "name": null, 00:41:51.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:51.296 "is_configured": false, 00:41:51.296 "data_offset": 0, 00:41:51.296 "data_size": 63488 00:41:51.296 }, 00:41:51.296 { 00:41:51.296 "name": "BaseBdev2", 00:41:51.296 "uuid": "8a274a42-f820-524c-8907-fd83117bd9fa", 00:41:51.296 "is_configured": true, 00:41:51.296 "data_offset": 2048, 00:41:51.296 "data_size": 63488 00:41:51.296 }, 00:41:51.296 { 00:41:51.296 "name": "BaseBdev3", 00:41:51.296 "uuid": "5626c641-13a7-5473-bd3a-5c5709fb674f", 00:41:51.296 "is_configured": true, 00:41:51.296 "data_offset": 2048, 00:41:51.296 "data_size": 63488 00:41:51.296 }, 00:41:51.296 { 00:41:51.296 "name": "BaseBdev4", 00:41:51.296 "uuid": "cd127e4b-64f2-53f7-becd-9adb53efe756", 00:41:51.296 "is_configured": true, 00:41:51.296 "data_offset": 2048, 00:41:51.296 "data_size": 63488 00:41:51.296 } 00:41:51.296 ] 00:41:51.296 }' 00:41:51.296 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:51.296 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:41:51.296 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:51.296 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:41:51.296 
23:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:41:51.296 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:41:51.296 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:41:51.296 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:41:51.296 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:51.296 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:41:51.296 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:51.296 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:41:51.296 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:51.296 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:51.296 [2024-12-09 23:23:31.890659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:41:51.296 [2024-12-09 23:23:31.890996] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:41:51.296 [2024-12-09 23:23:31.891153] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:41:51.296 request: 00:41:51.296 { 00:41:51.296 "base_bdev": "BaseBdev1", 00:41:51.296 "raid_bdev": "raid_bdev1", 00:41:51.296 "method": "bdev_raid_add_base_bdev", 00:41:51.296 "req_id": 1 00:41:51.296 } 00:41:51.296 Got JSON-RPC error response 00:41:51.296 response: 00:41:51.296 { 00:41:51.296 "code": -22, 00:41:51.296 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:41:51.296 } 00:41:51.296 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:41:51.296 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:41:51.296 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:51.296 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:51.296 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:51.296 23:23:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:41:52.717 23:23:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:41:52.717 23:23:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:52.717 23:23:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:52.717 23:23:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:52.717 23:23:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:52.717 23:23:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:41:52.717 23:23:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:52.717 23:23:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:52.717 23:23:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:52.717 23:23:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:52.717 23:23:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:52.717 23:23:32 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:52.717 23:23:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.717 23:23:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:52.717 23:23:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.717 23:23:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:52.717 "name": "raid_bdev1", 00:41:52.717 "uuid": "376dcfcb-887e-492e-bd10-52e85c73370c", 00:41:52.717 "strip_size_kb": 64, 00:41:52.717 "state": "online", 00:41:52.717 "raid_level": "raid5f", 00:41:52.717 "superblock": true, 00:41:52.717 "num_base_bdevs": 4, 00:41:52.717 "num_base_bdevs_discovered": 3, 00:41:52.717 "num_base_bdevs_operational": 3, 00:41:52.717 "base_bdevs_list": [ 00:41:52.717 { 00:41:52.717 "name": null, 00:41:52.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:52.717 "is_configured": false, 00:41:52.717 "data_offset": 0, 00:41:52.717 "data_size": 63488 00:41:52.717 }, 00:41:52.717 { 00:41:52.717 "name": "BaseBdev2", 00:41:52.717 "uuid": "8a274a42-f820-524c-8907-fd83117bd9fa", 00:41:52.717 "is_configured": true, 00:41:52.717 "data_offset": 2048, 00:41:52.717 "data_size": 63488 00:41:52.717 }, 00:41:52.717 { 00:41:52.717 "name": "BaseBdev3", 00:41:52.717 "uuid": "5626c641-13a7-5473-bd3a-5c5709fb674f", 00:41:52.717 "is_configured": true, 00:41:52.717 "data_offset": 2048, 00:41:52.717 "data_size": 63488 00:41:52.717 }, 00:41:52.717 { 00:41:52.717 "name": "BaseBdev4", 00:41:52.717 "uuid": "cd127e4b-64f2-53f7-becd-9adb53efe756", 00:41:52.717 "is_configured": true, 00:41:52.717 "data_offset": 2048, 00:41:52.717 "data_size": 63488 00:41:52.717 } 00:41:52.717 ] 00:41:52.717 }' 00:41:52.717 23:23:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:52.717 23:23:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:41:52.717 23:23:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:52.717 23:23:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:52.717 23:23:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:41:52.717 23:23:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:41:52.717 23:23:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:52.717 23:23:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:52.717 23:23:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.717 23:23:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:52.717 23:23:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:52.977 23:23:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.977 23:23:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:52.977 "name": "raid_bdev1", 00:41:52.977 "uuid": "376dcfcb-887e-492e-bd10-52e85c73370c", 00:41:52.977 "strip_size_kb": 64, 00:41:52.977 "state": "online", 00:41:52.977 "raid_level": "raid5f", 00:41:52.977 "superblock": true, 00:41:52.977 "num_base_bdevs": 4, 00:41:52.977 "num_base_bdevs_discovered": 3, 00:41:52.977 "num_base_bdevs_operational": 3, 00:41:52.977 "base_bdevs_list": [ 00:41:52.977 { 00:41:52.977 "name": null, 00:41:52.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:52.977 "is_configured": false, 00:41:52.977 "data_offset": 0, 00:41:52.977 "data_size": 63488 00:41:52.977 }, 00:41:52.977 { 00:41:52.977 "name": "BaseBdev2", 00:41:52.977 "uuid": "8a274a42-f820-524c-8907-fd83117bd9fa", 00:41:52.977 "is_configured": true, 
00:41:52.977 "data_offset": 2048, 00:41:52.977 "data_size": 63488 00:41:52.977 }, 00:41:52.977 { 00:41:52.977 "name": "BaseBdev3", 00:41:52.977 "uuid": "5626c641-13a7-5473-bd3a-5c5709fb674f", 00:41:52.977 "is_configured": true, 00:41:52.977 "data_offset": 2048, 00:41:52.977 "data_size": 63488 00:41:52.977 }, 00:41:52.977 { 00:41:52.977 "name": "BaseBdev4", 00:41:52.977 "uuid": "cd127e4b-64f2-53f7-becd-9adb53efe756", 00:41:52.977 "is_configured": true, 00:41:52.977 "data_offset": 2048, 00:41:52.977 "data_size": 63488 00:41:52.977 } 00:41:52.977 ] 00:41:52.977 }' 00:41:52.977 23:23:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:52.977 23:23:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:41:52.977 23:23:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:52.977 23:23:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:41:52.977 23:23:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 84997 00:41:52.977 23:23:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84997 ']' 00:41:52.977 23:23:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 84997 00:41:52.977 23:23:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:41:52.977 23:23:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:52.977 23:23:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84997 00:41:52.977 killing process with pid 84997 00:41:52.977 Received shutdown signal, test time was about 60.000000 seconds 00:41:52.977 00:41:52.977 Latency(us) 00:41:52.977 [2024-12-09T23:23:33.613Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:52.977 [2024-12-09T23:23:33.613Z] 
=================================================================================================================== 00:41:52.977 [2024-12-09T23:23:33.613Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:41:52.977 23:23:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:52.977 23:23:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:52.977 23:23:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84997' 00:41:52.977 23:23:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 84997 00:41:52.977 [2024-12-09 23:23:33.507877] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:41:52.977 23:23:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 84997 00:41:52.977 [2024-12-09 23:23:33.508002] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:52.977 [2024-12-09 23:23:33.508079] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:41:52.977 [2024-12-09 23:23:33.508095] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:41:53.544 [2024-12-09 23:23:34.000732] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:41:54.920 23:23:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:41:54.920 00:41:54.920 real 0m27.086s 00:41:54.920 user 0m33.924s 00:41:54.920 sys 0m3.361s 00:41:54.920 ************************************ 00:41:54.920 END TEST raid5f_rebuild_test_sb 00:41:54.920 ************************************ 00:41:54.920 23:23:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:54.920 23:23:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:54.920 23:23:35 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:41:54.920 23:23:35 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:41:54.920 23:23:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:41:54.920 23:23:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:54.920 23:23:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:41:54.920 ************************************ 00:41:54.920 START TEST raid_state_function_test_sb_4k 00:41:54.920 ************************************ 00:41:54.920 23:23:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:41:54.920 23:23:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:41:54.920 23:23:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:41:54.920 23:23:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:41:54.920 23:23:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:41:54.920 23:23:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:41:54.920 23:23:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:41:54.920 23:23:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:41:54.920 23:23:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:41:54.920 23:23:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:41:54.920 23:23:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:41:54.920 23:23:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:41:54.920 23:23:35 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:41:54.920 23:23:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:41:54.920 23:23:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:41:54.920 23:23:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:41:54.920 23:23:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:41:54.920 23:23:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:41:54.920 23:23:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:41:54.920 23:23:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:41:54.920 23:23:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:41:54.920 23:23:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:41:54.920 23:23:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:41:54.920 Process raid pid: 85807 00:41:54.920 23:23:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=85807 00:41:54.920 23:23:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85807' 00:41:54.920 23:23:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:41:54.920 23:23:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 85807 00:41:54.920 23:23:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 85807 ']' 00:41:54.920 23:23:35 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:54.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:54.920 23:23:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:54.921 23:23:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:54.921 23:23:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:54.921 23:23:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:41:54.921 [2024-12-09 23:23:35.294699] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:41:54.921 [2024-12-09 23:23:35.294828] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:54.921 [2024-12-09 23:23:35.476529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:55.180 [2024-12-09 23:23:35.591179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:55.180 [2024-12-09 23:23:35.797549] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:41:55.180 [2024-12-09 23:23:35.797595] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:41:55.747 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:55.747 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:41:55.747 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:41:55.747 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:55.747 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:41:55.747 [2024-12-09 23:23:36.155182] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:41:55.747 [2024-12-09 23:23:36.155379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:41:55.747 [2024-12-09 23:23:36.155504] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:41:55.747 [2024-12-09 23:23:36.155628] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:41:55.748 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:55.748 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:41:55.748 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:41:55.748 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:41:55.748 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:41:55.748 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:41:55.748 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:41:55.748 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:55.748 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:55.748 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:55.748 
23:23:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:55.748 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:55.748 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:55.748 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:55.748 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:41:55.748 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:55.748 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:55.748 "name": "Existed_Raid", 00:41:55.748 "uuid": "eb9cab48-3732-4978-9a09-ff1bda11e601", 00:41:55.748 "strip_size_kb": 0, 00:41:55.748 "state": "configuring", 00:41:55.748 "raid_level": "raid1", 00:41:55.748 "superblock": true, 00:41:55.748 "num_base_bdevs": 2, 00:41:55.748 "num_base_bdevs_discovered": 0, 00:41:55.748 "num_base_bdevs_operational": 2, 00:41:55.748 "base_bdevs_list": [ 00:41:55.748 { 00:41:55.748 "name": "BaseBdev1", 00:41:55.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:55.748 "is_configured": false, 00:41:55.748 "data_offset": 0, 00:41:55.748 "data_size": 0 00:41:55.748 }, 00:41:55.748 { 00:41:55.748 "name": "BaseBdev2", 00:41:55.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:55.748 "is_configured": false, 00:41:55.748 "data_offset": 0, 00:41:55.748 "data_size": 0 00:41:55.748 } 00:41:55.748 ] 00:41:55.748 }' 00:41:55.748 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:55.748 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:41:56.007 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:41:56.007 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:56.007 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:41:56.007 [2024-12-09 23:23:36.578577] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:41:56.007 [2024-12-09 23:23:36.578617] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:41:56.007 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:56.007 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:41:56.007 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:56.007 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:41:56.007 [2024-12-09 23:23:36.590550] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:41:56.007 [2024-12-09 23:23:36.590599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:41:56.007 [2024-12-09 23:23:36.590610] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:41:56.007 [2024-12-09 23:23:36.590625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:41:56.007 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:56.007 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:41:56.007 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:56.007 23:23:36 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:41:56.007 [2024-12-09 23:23:36.640334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:41:56.007 BaseBdev1 00:41:56.266 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:56.266 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:41:56.266 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:41:56.266 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:56.266 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:41:56.266 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:56.266 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:56.266 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:41:56.266 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:56.266 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:41:56.266 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:56.266 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:41:56.266 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:56.266 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:41:56.266 [ 00:41:56.266 { 00:41:56.266 "name": "BaseBdev1", 00:41:56.266 "aliases": [ 00:41:56.266 
"60896aae-b688-4e00-91ae-75c26bf0220e" 00:41:56.266 ], 00:41:56.266 "product_name": "Malloc disk", 00:41:56.266 "block_size": 4096, 00:41:56.266 "num_blocks": 8192, 00:41:56.266 "uuid": "60896aae-b688-4e00-91ae-75c26bf0220e", 00:41:56.266 "assigned_rate_limits": { 00:41:56.266 "rw_ios_per_sec": 0, 00:41:56.266 "rw_mbytes_per_sec": 0, 00:41:56.266 "r_mbytes_per_sec": 0, 00:41:56.266 "w_mbytes_per_sec": 0 00:41:56.266 }, 00:41:56.266 "claimed": true, 00:41:56.266 "claim_type": "exclusive_write", 00:41:56.266 "zoned": false, 00:41:56.266 "supported_io_types": { 00:41:56.266 "read": true, 00:41:56.266 "write": true, 00:41:56.266 "unmap": true, 00:41:56.266 "flush": true, 00:41:56.266 "reset": true, 00:41:56.266 "nvme_admin": false, 00:41:56.266 "nvme_io": false, 00:41:56.266 "nvme_io_md": false, 00:41:56.266 "write_zeroes": true, 00:41:56.266 "zcopy": true, 00:41:56.266 "get_zone_info": false, 00:41:56.266 "zone_management": false, 00:41:56.266 "zone_append": false, 00:41:56.266 "compare": false, 00:41:56.266 "compare_and_write": false, 00:41:56.266 "abort": true, 00:41:56.266 "seek_hole": false, 00:41:56.266 "seek_data": false, 00:41:56.266 "copy": true, 00:41:56.266 "nvme_iov_md": false 00:41:56.266 }, 00:41:56.266 "memory_domains": [ 00:41:56.266 { 00:41:56.266 "dma_device_id": "system", 00:41:56.266 "dma_device_type": 1 00:41:56.266 }, 00:41:56.266 { 00:41:56.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:56.266 "dma_device_type": 2 00:41:56.266 } 00:41:56.266 ], 00:41:56.266 "driver_specific": {} 00:41:56.266 } 00:41:56.266 ] 00:41:56.266 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:56.266 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:41:56.266 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:41:56.266 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:41:56.266 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:41:56.266 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:41:56.266 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:41:56.266 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:41:56.266 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:56.266 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:56.266 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:56.266 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:56.266 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:56.266 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:56.266 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:56.266 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:41:56.266 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:56.266 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:56.266 "name": "Existed_Raid", 00:41:56.266 "uuid": "6da7e65a-2fa4-432d-a54c-667ad695b798", 00:41:56.266 "strip_size_kb": 0, 00:41:56.266 "state": "configuring", 00:41:56.266 "raid_level": "raid1", 00:41:56.266 "superblock": true, 00:41:56.266 "num_base_bdevs": 2, 00:41:56.266 
"num_base_bdevs_discovered": 1, 00:41:56.266 "num_base_bdevs_operational": 2, 00:41:56.266 "base_bdevs_list": [ 00:41:56.266 { 00:41:56.266 "name": "BaseBdev1", 00:41:56.266 "uuid": "60896aae-b688-4e00-91ae-75c26bf0220e", 00:41:56.266 "is_configured": true, 00:41:56.266 "data_offset": 256, 00:41:56.266 "data_size": 7936 00:41:56.266 }, 00:41:56.266 { 00:41:56.266 "name": "BaseBdev2", 00:41:56.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:56.266 "is_configured": false, 00:41:56.266 "data_offset": 0, 00:41:56.266 "data_size": 0 00:41:56.266 } 00:41:56.266 ] 00:41:56.266 }' 00:41:56.266 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:56.266 23:23:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:41:56.525 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:41:56.525 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:56.525 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:41:56.525 [2024-12-09 23:23:37.119720] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:41:56.525 [2024-12-09 23:23:37.119913] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:41:56.525 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:56.525 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:41:56.525 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:56.525 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:41:56.525 [2024-12-09 23:23:37.131742] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:41:56.525 [2024-12-09 23:23:37.134016] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:41:56.525 [2024-12-09 23:23:37.134183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:41:56.525 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:56.525 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:41:56.525 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:41:56.525 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:41:56.525 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:41:56.525 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:41:56.525 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:41:56.525 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:41:56.525 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:41:56.525 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:56.525 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:56.525 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:56.525 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:56.525 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:41:56.525 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:56.525 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:56.525 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:41:56.783 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:56.783 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:56.783 "name": "Existed_Raid", 00:41:56.783 "uuid": "881128a4-b312-4b51-ab8e-699c1b3bc018", 00:41:56.783 "strip_size_kb": 0, 00:41:56.783 "state": "configuring", 00:41:56.783 "raid_level": "raid1", 00:41:56.783 "superblock": true, 00:41:56.783 "num_base_bdevs": 2, 00:41:56.783 "num_base_bdevs_discovered": 1, 00:41:56.783 "num_base_bdevs_operational": 2, 00:41:56.783 "base_bdevs_list": [ 00:41:56.783 { 00:41:56.783 "name": "BaseBdev1", 00:41:56.783 "uuid": "60896aae-b688-4e00-91ae-75c26bf0220e", 00:41:56.783 "is_configured": true, 00:41:56.783 "data_offset": 256, 00:41:56.783 "data_size": 7936 00:41:56.783 }, 00:41:56.783 { 00:41:56.783 "name": "BaseBdev2", 00:41:56.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:56.783 "is_configured": false, 00:41:56.783 "data_offset": 0, 00:41:56.783 "data_size": 0 00:41:56.783 } 00:41:56.783 ] 00:41:56.783 }' 00:41:56.783 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:56.783 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:41:57.042 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:41:57.042 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.042 23:23:37 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:41:57.042 [2024-12-09 23:23:37.600040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:41:57.042 BaseBdev2 00:41:57.042 [2024-12-09 23:23:37.600805] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:41:57.042 [2024-12-09 23:23:37.600830] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:41:57.042 [2024-12-09 23:23:37.601104] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:41:57.042 [2024-12-09 23:23:37.601272] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:41:57.042 [2024-12-09 23:23:37.601289] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:41:57.042 [2024-12-09 23:23:37.601457] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:57.042 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:57.043 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:41:57.043 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:41:57.043 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:57.043 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:41:57.043 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:57.043 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:57.043 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:41:57.043 23:23:37 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.043 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:41:57.043 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:57.043 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:41:57.043 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.043 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:41:57.043 [ 00:41:57.043 { 00:41:57.043 "name": "BaseBdev2", 00:41:57.043 "aliases": [ 00:41:57.043 "1c896887-d5c5-4348-bc65-ede627839b28" 00:41:57.043 ], 00:41:57.043 "product_name": "Malloc disk", 00:41:57.043 "block_size": 4096, 00:41:57.043 "num_blocks": 8192, 00:41:57.043 "uuid": "1c896887-d5c5-4348-bc65-ede627839b28", 00:41:57.043 "assigned_rate_limits": { 00:41:57.043 "rw_ios_per_sec": 0, 00:41:57.043 "rw_mbytes_per_sec": 0, 00:41:57.043 "r_mbytes_per_sec": 0, 00:41:57.043 "w_mbytes_per_sec": 0 00:41:57.043 }, 00:41:57.043 "claimed": true, 00:41:57.043 "claim_type": "exclusive_write", 00:41:57.043 "zoned": false, 00:41:57.043 "supported_io_types": { 00:41:57.043 "read": true, 00:41:57.043 "write": true, 00:41:57.043 "unmap": true, 00:41:57.043 "flush": true, 00:41:57.043 "reset": true, 00:41:57.043 "nvme_admin": false, 00:41:57.043 "nvme_io": false, 00:41:57.043 "nvme_io_md": false, 00:41:57.043 "write_zeroes": true, 00:41:57.043 "zcopy": true, 00:41:57.043 "get_zone_info": false, 00:41:57.043 "zone_management": false, 00:41:57.043 "zone_append": false, 00:41:57.043 "compare": false, 00:41:57.043 "compare_and_write": false, 00:41:57.043 "abort": true, 00:41:57.043 "seek_hole": false, 00:41:57.043 "seek_data": false, 00:41:57.043 "copy": true, 00:41:57.043 "nvme_iov_md": false 
00:41:57.043 }, 00:41:57.043 "memory_domains": [ 00:41:57.043 { 00:41:57.043 "dma_device_id": "system", 00:41:57.043 "dma_device_type": 1 00:41:57.043 }, 00:41:57.043 { 00:41:57.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:57.043 "dma_device_type": 2 00:41:57.043 } 00:41:57.043 ], 00:41:57.043 "driver_specific": {} 00:41:57.043 } 00:41:57.043 ] 00:41:57.043 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:57.043 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:41:57.043 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:41:57.043 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:41:57.043 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:41:57.043 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:41:57.043 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:57.043 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:41:57.043 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:41:57.043 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:41:57.043 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:57.043 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:57.043 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:57.043 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:41:57.043 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:57.043 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:57.043 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.043 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:41:57.043 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:57.301 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:57.301 "name": "Existed_Raid", 00:41:57.301 "uuid": "881128a4-b312-4b51-ab8e-699c1b3bc018", 00:41:57.301 "strip_size_kb": 0, 00:41:57.301 "state": "online", 00:41:57.301 "raid_level": "raid1", 00:41:57.301 "superblock": true, 00:41:57.301 "num_base_bdevs": 2, 00:41:57.301 "num_base_bdevs_discovered": 2, 00:41:57.301 "num_base_bdevs_operational": 2, 00:41:57.301 "base_bdevs_list": [ 00:41:57.301 { 00:41:57.301 "name": "BaseBdev1", 00:41:57.301 "uuid": "60896aae-b688-4e00-91ae-75c26bf0220e", 00:41:57.301 "is_configured": true, 00:41:57.301 "data_offset": 256, 00:41:57.301 "data_size": 7936 00:41:57.301 }, 00:41:57.301 { 00:41:57.301 "name": "BaseBdev2", 00:41:57.301 "uuid": "1c896887-d5c5-4348-bc65-ede627839b28", 00:41:57.301 "is_configured": true, 00:41:57.301 "data_offset": 256, 00:41:57.301 "data_size": 7936 00:41:57.301 } 00:41:57.301 ] 00:41:57.301 }' 00:41:57.301 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:57.301 23:23:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:41:57.560 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:41:57.560 23:23:38 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:41:57.560 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:41:57.560 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:41:57.560 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:41:57.560 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:41:57.560 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:41:57.560 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.560 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:41:57.560 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:41:57.560 [2024-12-09 23:23:38.099782] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:41:57.560 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:57.560 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:41:57.560 "name": "Existed_Raid", 00:41:57.560 "aliases": [ 00:41:57.560 "881128a4-b312-4b51-ab8e-699c1b3bc018" 00:41:57.560 ], 00:41:57.560 "product_name": "Raid Volume", 00:41:57.560 "block_size": 4096, 00:41:57.560 "num_blocks": 7936, 00:41:57.560 "uuid": "881128a4-b312-4b51-ab8e-699c1b3bc018", 00:41:57.560 "assigned_rate_limits": { 00:41:57.560 "rw_ios_per_sec": 0, 00:41:57.560 "rw_mbytes_per_sec": 0, 00:41:57.560 "r_mbytes_per_sec": 0, 00:41:57.560 "w_mbytes_per_sec": 0 00:41:57.560 }, 00:41:57.560 "claimed": false, 00:41:57.560 "zoned": false, 00:41:57.560 "supported_io_types": { 00:41:57.560 "read": true, 
00:41:57.560 "write": true, 00:41:57.560 "unmap": false, 00:41:57.560 "flush": false, 00:41:57.560 "reset": true, 00:41:57.560 "nvme_admin": false, 00:41:57.560 "nvme_io": false, 00:41:57.560 "nvme_io_md": false, 00:41:57.560 "write_zeroes": true, 00:41:57.560 "zcopy": false, 00:41:57.560 "get_zone_info": false, 00:41:57.560 "zone_management": false, 00:41:57.560 "zone_append": false, 00:41:57.560 "compare": false, 00:41:57.560 "compare_and_write": false, 00:41:57.560 "abort": false, 00:41:57.560 "seek_hole": false, 00:41:57.560 "seek_data": false, 00:41:57.560 "copy": false, 00:41:57.560 "nvme_iov_md": false 00:41:57.560 }, 00:41:57.560 "memory_domains": [ 00:41:57.560 { 00:41:57.560 "dma_device_id": "system", 00:41:57.560 "dma_device_type": 1 00:41:57.560 }, 00:41:57.560 { 00:41:57.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:57.560 "dma_device_type": 2 00:41:57.560 }, 00:41:57.560 { 00:41:57.560 "dma_device_id": "system", 00:41:57.560 "dma_device_type": 1 00:41:57.560 }, 00:41:57.560 { 00:41:57.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:57.560 "dma_device_type": 2 00:41:57.560 } 00:41:57.560 ], 00:41:57.560 "driver_specific": { 00:41:57.560 "raid": { 00:41:57.560 "uuid": "881128a4-b312-4b51-ab8e-699c1b3bc018", 00:41:57.560 "strip_size_kb": 0, 00:41:57.560 "state": "online", 00:41:57.560 "raid_level": "raid1", 00:41:57.560 "superblock": true, 00:41:57.560 "num_base_bdevs": 2, 00:41:57.560 "num_base_bdevs_discovered": 2, 00:41:57.560 "num_base_bdevs_operational": 2, 00:41:57.560 "base_bdevs_list": [ 00:41:57.560 { 00:41:57.560 "name": "BaseBdev1", 00:41:57.560 "uuid": "60896aae-b688-4e00-91ae-75c26bf0220e", 00:41:57.560 "is_configured": true, 00:41:57.560 "data_offset": 256, 00:41:57.560 "data_size": 7936 00:41:57.560 }, 00:41:57.560 { 00:41:57.560 "name": "BaseBdev2", 00:41:57.560 "uuid": "1c896887-d5c5-4348-bc65-ede627839b28", 00:41:57.560 "is_configured": true, 00:41:57.560 "data_offset": 256, 00:41:57.560 "data_size": 7936 00:41:57.560 } 
00:41:57.560 ] 00:41:57.560 } 00:41:57.560 } 00:41:57.560 }' 00:41:57.560 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:41:57.560 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:41:57.560 BaseBdev2' 00:41:57.560 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:41:57.819 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:41:57.819 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:41:57.819 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:41:57.819 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:41:57.819 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.819 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:41:57.819 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:57.819 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:41:57.819 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:41:57.819 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:41:57.820 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:41:57.820 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.820 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:41:57.820 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:41:57.820 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:57.820 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:41:57.820 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:41:57.820 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:41:57.820 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.820 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:41:57.820 [2024-12-09 23:23:38.327422] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:41:57.820 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:57.820 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:41:57.820 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:41:57.820 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:41:57.820 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:41:57.820 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:41:57.820 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:41:57.820 23:23:38 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:41:57.820 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:57.820 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:41:57.820 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:41:57.820 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:41:57.820 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:57.820 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:57.820 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:57.820 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:57.820 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:57.820 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:57.820 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.820 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:41:58.079 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:58.079 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:58.079 "name": "Existed_Raid", 00:41:58.079 "uuid": "881128a4-b312-4b51-ab8e-699c1b3bc018", 00:41:58.079 "strip_size_kb": 0, 00:41:58.079 "state": "online", 00:41:58.079 "raid_level": "raid1", 00:41:58.079 "superblock": true, 00:41:58.079 
"num_base_bdevs": 2, 00:41:58.079 "num_base_bdevs_discovered": 1, 00:41:58.079 "num_base_bdevs_operational": 1, 00:41:58.079 "base_bdevs_list": [ 00:41:58.079 { 00:41:58.079 "name": null, 00:41:58.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:58.079 "is_configured": false, 00:41:58.079 "data_offset": 0, 00:41:58.079 "data_size": 7936 00:41:58.079 }, 00:41:58.079 { 00:41:58.079 "name": "BaseBdev2", 00:41:58.079 "uuid": "1c896887-d5c5-4348-bc65-ede627839b28", 00:41:58.079 "is_configured": true, 00:41:58.079 "data_offset": 256, 00:41:58.079 "data_size": 7936 00:41:58.079 } 00:41:58.079 ] 00:41:58.079 }' 00:41:58.079 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:58.079 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:41:58.338 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:41:58.338 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:41:58.338 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:41:58.338 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:58.338 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:58.338 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:41:58.338 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:58.338 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:41:58.338 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:41:58.338 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:41:58.338 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:58.338 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:41:58.338 [2024-12-09 23:23:38.903777] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:41:58.338 [2024-12-09 23:23:38.904023] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:41:58.597 [2024-12-09 23:23:38.999519] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:58.597 [2024-12-09 23:23:38.999756] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:41:58.597 [2024-12-09 23:23:38.999923] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:41:58.597 23:23:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:58.597 23:23:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:41:58.597 23:23:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:41:58.597 23:23:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:58.597 23:23:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:41:58.597 23:23:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:58.597 23:23:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:41:58.597 23:23:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:58.597 23:23:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:41:58.597 23:23:39 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:41:58.597 23:23:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:41:58.597 23:23:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 85807 00:41:58.597 23:23:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 85807 ']' 00:41:58.597 23:23:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 85807 00:41:58.597 23:23:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:41:58.597 23:23:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:58.597 23:23:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85807 00:41:58.597 killing process with pid 85807 00:41:58.597 23:23:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:58.597 23:23:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:58.597 23:23:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85807' 00:41:58.597 23:23:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 85807 00:41:58.597 [2024-12-09 23:23:39.094336] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:41:58.597 23:23:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 85807 00:41:58.597 [2024-12-09 23:23:39.110633] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:41:59.973 ************************************ 00:41:59.973 END TEST raid_state_function_test_sb_4k 00:41:59.973 ************************************ 00:41:59.973 23:23:40 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@328 -- # return 0 00:41:59.973 00:41:59.973 real 0m5.048s 00:41:59.973 user 0m7.214s 00:41:59.973 sys 0m0.959s 00:41:59.973 23:23:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:59.973 23:23:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:41:59.973 23:23:40 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:41:59.973 23:23:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:59.973 23:23:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:59.973 23:23:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:41:59.973 ************************************ 00:41:59.973 START TEST raid_superblock_test_4k 00:41:59.973 ************************************ 00:41:59.973 23:23:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:41:59.973 23:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:41:59.973 23:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:41:59.973 23:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:41:59.973 23:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:41:59.973 23:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:41:59.973 23:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:41:59.973 23:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:41:59.973 23:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:41:59.973 23:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:41:59.973 
23:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:41:59.973 23:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:41:59.973 23:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:41:59.973 23:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:41:59.973 23:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:41:59.973 23:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:41:59.973 23:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86054 00:41:59.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:59.973 23:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86054 00:41:59.974 23:23:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86054 ']' 00:41:59.974 23:23:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:59.974 23:23:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:59.974 23:23:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:59.974 23:23:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:41:59.974 23:23:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:59.974 23:23:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:41:59.974 [2024-12-09 23:23:40.407978] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:41:59.974 [2024-12-09 23:23:40.408110] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86054 ] 00:41:59.974 [2024-12-09 23:23:40.595711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:00.232 [2024-12-09 23:23:40.726543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:00.490 [2024-12-09 23:23:40.954072] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:42:00.490 [2024-12-09 23:23:40.954134] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:42:00.748 23:23:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:00.748 23:23:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:42:00.748 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:42:00.748 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:42:00.748 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:42:00.748 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:42:00.748 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:42:00.748 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:42:00.748 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:42:00.748 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:42:00.748 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:42:00.748 23:23:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:00.748 23:23:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:00.748 malloc1 00:42:00.748 23:23:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:00.748 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:42:00.748 23:23:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:00.748 23:23:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:00.748 [2024-12-09 23:23:41.332028] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:42:00.748 [2024-12-09 23:23:41.332240] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:00.748 [2024-12-09 23:23:41.332281] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:42:00.748 [2024-12-09 23:23:41.332302] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:00.748 [2024-12-09 23:23:41.334935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:00.748 [2024-12-09 23:23:41.334981] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:42:00.748 pt1 00:42:00.749 23:23:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:00.749 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:42:00.749 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:42:00.749 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:42:00.749 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:42:00.749 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:42:00.749 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:42:00.749 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:42:00.749 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:42:00.749 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:42:00.749 23:23:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:00.749 23:23:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:01.008 malloc2 00:42:01.008 23:23:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.008 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:42:01.008 23:23:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.008 23:23:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:01.008 [2024-12-09 23:23:41.391948] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:42:01.008 [2024-12-09 23:23:41.392152] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:01.008 [2024-12-09 23:23:41.392233] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:42:01.008 [2024-12-09 23:23:41.392337] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:01.008 [2024-12-09 23:23:41.395183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:01.008 [2024-12-09 
23:23:41.395352] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:42:01.008 pt2 00:42:01.008 23:23:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.008 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:42:01.008 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:42:01.008 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:42:01.008 23:23:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.008 23:23:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:01.008 [2024-12-09 23:23:41.404107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:42:01.008 [2024-12-09 23:23:41.406504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:42:01.008 [2024-12-09 23:23:41.406681] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:42:01.008 [2024-12-09 23:23:41.406702] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:42:01.008 [2024-12-09 23:23:41.406978] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:42:01.008 [2024-12-09 23:23:41.407133] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:42:01.008 [2024-12-09 23:23:41.407153] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:42:01.008 [2024-12-09 23:23:41.407311] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:01.008 23:23:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.008 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:42:01.008 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:01.008 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:01.008 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:01.008 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:01.008 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:42:01.008 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:01.008 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:01.008 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:01.008 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:01.008 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:01.008 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:01.008 23:23:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.008 23:23:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:01.008 23:23:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.008 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:01.008 "name": "raid_bdev1", 00:42:01.008 "uuid": "c92de365-3eb6-4354-9428-869cc6aee4f4", 00:42:01.008 "strip_size_kb": 0, 00:42:01.008 "state": "online", 00:42:01.008 "raid_level": "raid1", 00:42:01.008 "superblock": true, 00:42:01.008 "num_base_bdevs": 2, 00:42:01.008 
"num_base_bdevs_discovered": 2, 00:42:01.008 "num_base_bdevs_operational": 2, 00:42:01.008 "base_bdevs_list": [ 00:42:01.008 { 00:42:01.008 "name": "pt1", 00:42:01.008 "uuid": "00000000-0000-0000-0000-000000000001", 00:42:01.008 "is_configured": true, 00:42:01.008 "data_offset": 256, 00:42:01.008 "data_size": 7936 00:42:01.008 }, 00:42:01.008 { 00:42:01.008 "name": "pt2", 00:42:01.008 "uuid": "00000000-0000-0000-0000-000000000002", 00:42:01.008 "is_configured": true, 00:42:01.008 "data_offset": 256, 00:42:01.008 "data_size": 7936 00:42:01.008 } 00:42:01.008 ] 00:42:01.008 }' 00:42:01.008 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:01.008 23:23:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:01.266 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:42:01.266 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:42:01.266 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:42:01.266 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:42:01.266 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:42:01.266 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:42:01.266 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:42:01.266 23:23:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.266 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:42:01.266 23:23:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:01.266 [2024-12-09 23:23:41.883747] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:42:01.525 23:23:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.525 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:42:01.525 "name": "raid_bdev1", 00:42:01.525 "aliases": [ 00:42:01.525 "c92de365-3eb6-4354-9428-869cc6aee4f4" 00:42:01.525 ], 00:42:01.525 "product_name": "Raid Volume", 00:42:01.525 "block_size": 4096, 00:42:01.525 "num_blocks": 7936, 00:42:01.525 "uuid": "c92de365-3eb6-4354-9428-869cc6aee4f4", 00:42:01.525 "assigned_rate_limits": { 00:42:01.525 "rw_ios_per_sec": 0, 00:42:01.525 "rw_mbytes_per_sec": 0, 00:42:01.525 "r_mbytes_per_sec": 0, 00:42:01.525 "w_mbytes_per_sec": 0 00:42:01.525 }, 00:42:01.525 "claimed": false, 00:42:01.525 "zoned": false, 00:42:01.525 "supported_io_types": { 00:42:01.525 "read": true, 00:42:01.525 "write": true, 00:42:01.525 "unmap": false, 00:42:01.525 "flush": false, 00:42:01.525 "reset": true, 00:42:01.525 "nvme_admin": false, 00:42:01.525 "nvme_io": false, 00:42:01.525 "nvme_io_md": false, 00:42:01.525 "write_zeroes": true, 00:42:01.525 "zcopy": false, 00:42:01.525 "get_zone_info": false, 00:42:01.525 "zone_management": false, 00:42:01.525 "zone_append": false, 00:42:01.525 "compare": false, 00:42:01.525 "compare_and_write": false, 00:42:01.525 "abort": false, 00:42:01.525 "seek_hole": false, 00:42:01.525 "seek_data": false, 00:42:01.525 "copy": false, 00:42:01.525 "nvme_iov_md": false 00:42:01.525 }, 00:42:01.525 "memory_domains": [ 00:42:01.525 { 00:42:01.525 "dma_device_id": "system", 00:42:01.525 "dma_device_type": 1 00:42:01.525 }, 00:42:01.525 { 00:42:01.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:42:01.525 "dma_device_type": 2 00:42:01.525 }, 00:42:01.525 { 00:42:01.525 "dma_device_id": "system", 00:42:01.525 "dma_device_type": 1 00:42:01.525 }, 00:42:01.525 { 00:42:01.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:42:01.525 "dma_device_type": 2 00:42:01.525 } 00:42:01.525 ], 
00:42:01.525 "driver_specific": { 00:42:01.525 "raid": { 00:42:01.525 "uuid": "c92de365-3eb6-4354-9428-869cc6aee4f4", 00:42:01.525 "strip_size_kb": 0, 00:42:01.525 "state": "online", 00:42:01.525 "raid_level": "raid1", 00:42:01.525 "superblock": true, 00:42:01.525 "num_base_bdevs": 2, 00:42:01.525 "num_base_bdevs_discovered": 2, 00:42:01.525 "num_base_bdevs_operational": 2, 00:42:01.525 "base_bdevs_list": [ 00:42:01.525 { 00:42:01.525 "name": "pt1", 00:42:01.525 "uuid": "00000000-0000-0000-0000-000000000001", 00:42:01.525 "is_configured": true, 00:42:01.525 "data_offset": 256, 00:42:01.525 "data_size": 7936 00:42:01.525 }, 00:42:01.525 { 00:42:01.525 "name": "pt2", 00:42:01.525 "uuid": "00000000-0000-0000-0000-000000000002", 00:42:01.525 "is_configured": true, 00:42:01.525 "data_offset": 256, 00:42:01.525 "data_size": 7936 00:42:01.525 } 00:42:01.525 ] 00:42:01.525 } 00:42:01.525 } 00:42:01.525 }' 00:42:01.525 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:42:01.525 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:42:01.525 pt2' 00:42:01.525 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:42:01.525 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:42:01.525 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:42:01.525 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:42:01.525 23:23:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.525 23:23:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:01.525 23:23:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:42:01.525 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.525 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:42:01.525 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:42:01.525 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:42:01.525 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:42:01.525 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:42:01.525 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.525 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:01.525 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.525 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:42:01.525 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:42:01.525 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:42:01.525 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:42:01.525 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.525 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:01.525 [2024-12-09 23:23:42.091406] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:42:01.525 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:42:01.525 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c92de365-3eb6-4354-9428-869cc6aee4f4 00:42:01.525 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z c92de365-3eb6-4354-9428-869cc6aee4f4 ']' 00:42:01.525 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:42:01.525 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.525 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:01.525 [2024-12-09 23:23:42.143041] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:42:01.525 [2024-12-09 23:23:42.143070] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:42:01.525 [2024-12-09 23:23:42.143161] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:01.525 [2024-12-09 23:23:42.143224] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:01.525 [2024-12-09 23:23:42.143240] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:42:01.525 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.525 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:42:01.526 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:01.526 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.526 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:01.787 [2024-12-09 23:23:42.278888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:42:01.787 [2024-12-09 23:23:42.281306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:42:01.787 [2024-12-09 23:23:42.281508] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:42:01.787 [2024-12-09 23:23:42.281691] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:42:01.787 [2024-12-09 23:23:42.281876] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:42:01.787 [2024-12-09 23:23:42.281918] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:42:01.787 request: 00:42:01.787 { 00:42:01.787 "name": "raid_bdev1", 00:42:01.787 "raid_level": "raid1", 00:42:01.787 "base_bdevs": [ 00:42:01.787 "malloc1", 00:42:01.787 "malloc2" 00:42:01.787 ], 00:42:01.787 "superblock": false, 00:42:01.787 "method": "bdev_raid_create", 00:42:01.787 "req_id": 1 00:42:01.787 } 00:42:01.787 Got JSON-RPC error response 00:42:01.787 response: 00:42:01.787 { 00:42:01.787 "code": -17, 00:42:01.787 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:42:01.787 } 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:01.787 [2024-12-09 23:23:42.346787] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:42:01.787 [2024-12-09 23:23:42.346853] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:01.787 [2024-12-09 23:23:42.346876] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:42:01.787 [2024-12-09 23:23:42.346891] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:01.787 [2024-12-09 23:23:42.349529] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:01.787 [2024-12-09 23:23:42.349574] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:42:01.787 [2024-12-09 23:23:42.349663] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:42:01.787 [2024-12-09 23:23:42.349725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:42:01.787 pt1 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:01.787 "name": "raid_bdev1", 00:42:01.787 "uuid": "c92de365-3eb6-4354-9428-869cc6aee4f4", 00:42:01.787 "strip_size_kb": 0, 00:42:01.787 "state": "configuring", 00:42:01.787 "raid_level": "raid1", 00:42:01.787 "superblock": true, 00:42:01.787 "num_base_bdevs": 2, 00:42:01.787 "num_base_bdevs_discovered": 1, 00:42:01.787 "num_base_bdevs_operational": 2, 00:42:01.787 "base_bdevs_list": [ 00:42:01.787 { 00:42:01.787 "name": "pt1", 00:42:01.787 "uuid": "00000000-0000-0000-0000-000000000001", 00:42:01.787 "is_configured": true, 00:42:01.787 "data_offset": 256, 00:42:01.787 "data_size": 7936 00:42:01.787 }, 00:42:01.787 { 00:42:01.787 "name": null, 00:42:01.787 "uuid": "00000000-0000-0000-0000-000000000002", 00:42:01.787 "is_configured": false, 00:42:01.787 "data_offset": 256, 00:42:01.787 "data_size": 7936 00:42:01.787 } 
00:42:01.787 ] 00:42:01.787 }' 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:01.787 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:02.378 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:42:02.378 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:42:02.378 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:42:02.378 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:42:02.378 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.378 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:02.378 [2024-12-09 23:23:42.778581] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:42:02.378 [2024-12-09 23:23:42.778658] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:02.378 [2024-12-09 23:23:42.778683] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:42:02.378 [2024-12-09 23:23:42.778697] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:02.378 [2024-12-09 23:23:42.779148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:02.378 [2024-12-09 23:23:42.779172] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:42:02.378 [2024-12-09 23:23:42.779252] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:42:02.378 [2024-12-09 23:23:42.779281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:42:02.378 [2024-12-09 23:23:42.779418] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:42:02.378 [2024-12-09 23:23:42.779433] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:42:02.378 [2024-12-09 23:23:42.779681] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:42:02.378 [2024-12-09 23:23:42.779816] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:42:02.378 [2024-12-09 23:23:42.779825] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:42:02.378 [2024-12-09 23:23:42.779974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:02.378 pt2 00:42:02.378 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:02.378 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:42:02.378 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:42:02.378 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:42:02.378 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:02.378 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:02.378 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:02.378 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:02.378 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:42:02.378 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:02.378 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:02.378 23:23:42 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:02.378 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:02.378 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:02.378 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:02.378 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.378 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:02.378 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:02.378 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:02.378 "name": "raid_bdev1", 00:42:02.378 "uuid": "c92de365-3eb6-4354-9428-869cc6aee4f4", 00:42:02.378 "strip_size_kb": 0, 00:42:02.378 "state": "online", 00:42:02.378 "raid_level": "raid1", 00:42:02.378 "superblock": true, 00:42:02.378 "num_base_bdevs": 2, 00:42:02.378 "num_base_bdevs_discovered": 2, 00:42:02.378 "num_base_bdevs_operational": 2, 00:42:02.378 "base_bdevs_list": [ 00:42:02.378 { 00:42:02.378 "name": "pt1", 00:42:02.378 "uuid": "00000000-0000-0000-0000-000000000001", 00:42:02.378 "is_configured": true, 00:42:02.378 "data_offset": 256, 00:42:02.378 "data_size": 7936 00:42:02.378 }, 00:42:02.378 { 00:42:02.378 "name": "pt2", 00:42:02.378 "uuid": "00000000-0000-0000-0000-000000000002", 00:42:02.378 "is_configured": true, 00:42:02.378 "data_offset": 256, 00:42:02.378 "data_size": 7936 00:42:02.378 } 00:42:02.378 ] 00:42:02.378 }' 00:42:02.378 23:23:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:02.378 23:23:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:02.638 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:42:02.638 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:42:02.638 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:42:02.638 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:42:02.638 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:42:02.638 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:42:02.638 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:42:02.638 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:42:02.638 23:23:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.638 23:23:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:02.638 [2024-12-09 23:23:43.174272] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:42:02.638 23:23:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:02.638 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:42:02.638 "name": "raid_bdev1", 00:42:02.638 "aliases": [ 00:42:02.638 "c92de365-3eb6-4354-9428-869cc6aee4f4" 00:42:02.638 ], 00:42:02.638 "product_name": "Raid Volume", 00:42:02.638 "block_size": 4096, 00:42:02.638 "num_blocks": 7936, 00:42:02.638 "uuid": "c92de365-3eb6-4354-9428-869cc6aee4f4", 00:42:02.638 "assigned_rate_limits": { 00:42:02.638 "rw_ios_per_sec": 0, 00:42:02.638 "rw_mbytes_per_sec": 0, 00:42:02.638 "r_mbytes_per_sec": 0, 00:42:02.638 "w_mbytes_per_sec": 0 00:42:02.638 }, 00:42:02.638 "claimed": false, 00:42:02.638 "zoned": false, 00:42:02.638 "supported_io_types": { 00:42:02.638 "read": true, 00:42:02.638 "write": true, 00:42:02.638 "unmap": false, 
00:42:02.638 "flush": false, 00:42:02.638 "reset": true, 00:42:02.638 "nvme_admin": false, 00:42:02.638 "nvme_io": false, 00:42:02.638 "nvme_io_md": false, 00:42:02.638 "write_zeroes": true, 00:42:02.638 "zcopy": false, 00:42:02.638 "get_zone_info": false, 00:42:02.638 "zone_management": false, 00:42:02.638 "zone_append": false, 00:42:02.638 "compare": false, 00:42:02.638 "compare_and_write": false, 00:42:02.638 "abort": false, 00:42:02.638 "seek_hole": false, 00:42:02.638 "seek_data": false, 00:42:02.638 "copy": false, 00:42:02.638 "nvme_iov_md": false 00:42:02.638 }, 00:42:02.638 "memory_domains": [ 00:42:02.638 { 00:42:02.638 "dma_device_id": "system", 00:42:02.638 "dma_device_type": 1 00:42:02.638 }, 00:42:02.638 { 00:42:02.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:42:02.638 "dma_device_type": 2 00:42:02.638 }, 00:42:02.638 { 00:42:02.638 "dma_device_id": "system", 00:42:02.638 "dma_device_type": 1 00:42:02.638 }, 00:42:02.638 { 00:42:02.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:42:02.638 "dma_device_type": 2 00:42:02.638 } 00:42:02.638 ], 00:42:02.638 "driver_specific": { 00:42:02.638 "raid": { 00:42:02.638 "uuid": "c92de365-3eb6-4354-9428-869cc6aee4f4", 00:42:02.638 "strip_size_kb": 0, 00:42:02.638 "state": "online", 00:42:02.638 "raid_level": "raid1", 00:42:02.638 "superblock": true, 00:42:02.638 "num_base_bdevs": 2, 00:42:02.638 "num_base_bdevs_discovered": 2, 00:42:02.638 "num_base_bdevs_operational": 2, 00:42:02.638 "base_bdevs_list": [ 00:42:02.638 { 00:42:02.638 "name": "pt1", 00:42:02.638 "uuid": "00000000-0000-0000-0000-000000000001", 00:42:02.638 "is_configured": true, 00:42:02.638 "data_offset": 256, 00:42:02.638 "data_size": 7936 00:42:02.638 }, 00:42:02.638 { 00:42:02.638 "name": "pt2", 00:42:02.638 "uuid": "00000000-0000-0000-0000-000000000002", 00:42:02.638 "is_configured": true, 00:42:02.638 "data_offset": 256, 00:42:02.638 "data_size": 7936 00:42:02.638 } 00:42:02.638 ] 00:42:02.638 } 00:42:02.638 } 00:42:02.638 }' 00:42:02.638 
23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:42:02.638 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:42:02.638 pt2' 00:42:02.638 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.898 
23:23:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:42:02.898 [2024-12-09 23:23:43.401898] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' c92de365-3eb6-4354-9428-869cc6aee4f4 '!=' c92de365-3eb6-4354-9428-869cc6aee4f4 ']' 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:02.898 [2024-12-09 23:23:43.433674] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:42:02.898 
23:23:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:02.898 "name": "raid_bdev1", 00:42:02.898 "uuid": "c92de365-3eb6-4354-9428-869cc6aee4f4", 
00:42:02.898 "strip_size_kb": 0, 00:42:02.898 "state": "online", 00:42:02.898 "raid_level": "raid1", 00:42:02.898 "superblock": true, 00:42:02.898 "num_base_bdevs": 2, 00:42:02.898 "num_base_bdevs_discovered": 1, 00:42:02.898 "num_base_bdevs_operational": 1, 00:42:02.898 "base_bdevs_list": [ 00:42:02.898 { 00:42:02.898 "name": null, 00:42:02.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:02.898 "is_configured": false, 00:42:02.898 "data_offset": 0, 00:42:02.898 "data_size": 7936 00:42:02.898 }, 00:42:02.898 { 00:42:02.898 "name": "pt2", 00:42:02.898 "uuid": "00000000-0000-0000-0000-000000000002", 00:42:02.898 "is_configured": true, 00:42:02.898 "data_offset": 256, 00:42:02.898 "data_size": 7936 00:42:02.898 } 00:42:02.898 ] 00:42:02.898 }' 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:02.898 23:23:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:03.467 [2024-12-09 23:23:43.837548] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:42:03.467 [2024-12-09 23:23:43.837701] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:42:03.467 [2024-12-09 23:23:43.837805] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:03.467 [2024-12-09 23:23:43.837854] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:03.467 [2024-12-09 23:23:43.837869] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:42:03.467 23:23:43 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:42:03.467 23:23:43 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:03.467 [2024-12-09 23:23:43.905533] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:42:03.467 [2024-12-09 23:23:43.905736] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:03.467 [2024-12-09 23:23:43.905767] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:42:03.467 [2024-12-09 23:23:43.905782] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:03.467 [2024-12-09 23:23:43.908359] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:03.467 [2024-12-09 23:23:43.908415] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:42:03.467 [2024-12-09 23:23:43.908509] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:42:03.467 [2024-12-09 23:23:43.908560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:42:03.467 [2024-12-09 23:23:43.908670] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:42:03.467 [2024-12-09 23:23:43.908685] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:42:03.467 [2024-12-09 23:23:43.908946] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:42:03.467 [2024-12-09 23:23:43.909102] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:42:03.467 [2024-12-09 23:23:43.909114] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:42:03.467 [2024-12-09 23:23:43.909278] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:03.467 pt2 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:03.467 "name": "raid_bdev1", 00:42:03.467 "uuid": "c92de365-3eb6-4354-9428-869cc6aee4f4", 00:42:03.467 "strip_size_kb": 0, 00:42:03.467 "state": "online", 00:42:03.467 "raid_level": "raid1", 00:42:03.467 "superblock": true, 00:42:03.467 "num_base_bdevs": 2, 00:42:03.467 "num_base_bdevs_discovered": 1, 00:42:03.467 "num_base_bdevs_operational": 1, 00:42:03.467 "base_bdevs_list": [ 00:42:03.467 { 00:42:03.467 "name": null, 00:42:03.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:03.467 "is_configured": false, 00:42:03.467 "data_offset": 256, 00:42:03.467 "data_size": 7936 00:42:03.467 }, 00:42:03.467 { 00:42:03.467 "name": "pt2", 00:42:03.467 "uuid": "00000000-0000-0000-0000-000000000002", 00:42:03.467 "is_configured": true, 00:42:03.467 "data_offset": 256, 00:42:03.467 "data_size": 7936 00:42:03.467 } 00:42:03.467 ] 00:42:03.467 }' 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:03.467 23:23:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:03.726 23:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:42:03.986 23:23:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.986 23:23:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:03.986 [2024-12-09 23:23:44.368879] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:42:03.986 [2024-12-09 23:23:44.369059] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:42:03.986 [2024-12-09 23:23:44.369232] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:03.986 [2024-12-09 23:23:44.369329] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:03.986 [2024-12-09 23:23:44.369582] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:42:03.986 23:23:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.986 23:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:03.986 23:23:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.986 23:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:42:03.986 23:23:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:03.986 23:23:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.986 23:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:42:03.986 23:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:42:03.986 23:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:42:03.986 23:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:42:03.986 23:23:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.986 23:23:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:03.986 [2024-12-09 23:23:44.428797] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:42:03.986 [2024-12-09 23:23:44.428879] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:03.986 [2024-12-09 23:23:44.428912] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:42:03.986 [2024-12-09 23:23:44.428938] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:03.986 [2024-12-09 23:23:44.431634] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:03.986 [2024-12-09 23:23:44.431676] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:42:03.986 [2024-12-09 23:23:44.431766] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:42:03.986 [2024-12-09 23:23:44.431822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:42:03.986 [2024-12-09 23:23:44.431980] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:42:03.986 [2024-12-09 23:23:44.431994] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:42:03.986 [2024-12-09 23:23:44.432012] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:42:03.986 [2024-12-09 23:23:44.432075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:42:03.986 [2024-12-09 23:23:44.432148] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:42:03.986 [2024-12-09 23:23:44.432158] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:42:03.986 [2024-12-09 23:23:44.432447] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:42:03.986 [2024-12-09 23:23:44.432642] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:42:03.986 [2024-12-09 23:23:44.432659] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:42:03.986 [2024-12-09 23:23:44.432864] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:03.986 pt1 00:42:03.986 23:23:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.986 23:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:42:03.986 23:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:42:03.986 23:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:03.986 23:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:03.986 23:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:03.986 23:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:03.986 23:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:42:03.986 23:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:03.986 23:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:03.986 23:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:03.986 23:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:03.986 23:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:03.986 23:23:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.986 23:23:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:03.986 23:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:03.986 23:23:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.986 23:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:03.986 "name": "raid_bdev1", 00:42:03.986 "uuid": "c92de365-3eb6-4354-9428-869cc6aee4f4", 00:42:03.986 "strip_size_kb": 0, 00:42:03.986 "state": "online", 00:42:03.986 "raid_level": "raid1", 
00:42:03.986 "superblock": true, 00:42:03.986 "num_base_bdevs": 2, 00:42:03.986 "num_base_bdevs_discovered": 1, 00:42:03.986 "num_base_bdevs_operational": 1, 00:42:03.986 "base_bdevs_list": [ 00:42:03.986 { 00:42:03.986 "name": null, 00:42:03.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:03.986 "is_configured": false, 00:42:03.986 "data_offset": 256, 00:42:03.986 "data_size": 7936 00:42:03.986 }, 00:42:03.986 { 00:42:03.986 "name": "pt2", 00:42:03.986 "uuid": "00000000-0000-0000-0000-000000000002", 00:42:03.986 "is_configured": true, 00:42:03.986 "data_offset": 256, 00:42:03.986 "data_size": 7936 00:42:03.986 } 00:42:03.986 ] 00:42:03.986 }' 00:42:03.986 23:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:03.986 23:23:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:04.246 23:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:42:04.246 23:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:42:04.246 23:23:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:04.246 23:23:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:04.246 23:23:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:04.246 23:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:42:04.246 23:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:42:04.246 23:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:42:04.246 23:23:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:04.246 23:23:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:04.505 
[2024-12-09 23:23:44.884508] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:42:04.505 23:23:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:04.505 23:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' c92de365-3eb6-4354-9428-869cc6aee4f4 '!=' c92de365-3eb6-4354-9428-869cc6aee4f4 ']' 00:42:04.505 23:23:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86054 00:42:04.505 23:23:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86054 ']' 00:42:04.505 23:23:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86054 00:42:04.505 23:23:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:42:04.505 23:23:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:04.505 23:23:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86054 00:42:04.505 23:23:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:04.505 killing process with pid 86054 00:42:04.505 23:23:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:04.505 23:23:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86054' 00:42:04.505 23:23:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86054 00:42:04.505 [2024-12-09 23:23:44.963379] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:42:04.505 [2024-12-09 23:23:44.963488] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:04.505 [2024-12-09 23:23:44.963537] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:04.505 [2024-12-09 23:23:44.963555] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:42:04.505 23:23:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86054 00:42:04.763 [2024-12-09 23:23:45.175089] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:42:05.699 23:23:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:42:05.699 ************************************ 00:42:05.699 END TEST raid_superblock_test_4k 00:42:05.699 ************************************ 00:42:05.699 00:42:05.699 real 0m6.003s 00:42:05.699 user 0m8.987s 00:42:05.699 sys 0m1.274s 00:42:05.699 23:23:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:05.699 23:23:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:05.959 23:23:46 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:42:05.959 23:23:46 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:42:05.959 23:23:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:42:05.959 23:23:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:05.959 23:23:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:42:05.959 ************************************ 00:42:05.959 START TEST raid_rebuild_test_sb_4k 00:42:05.959 ************************************ 00:42:05.959 23:23:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:42:05.959 23:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:42:05.959 23:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:42:05.959 23:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:42:05.959 23:23:46 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:42:05.959 23:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:42:05.959 23:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:42:05.959 23:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:42:05.959 23:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:42:05.959 23:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:42:05.959 23:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:42:05.959 23:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:42:05.959 23:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:42:05.959 23:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:42:05.959 23:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:42:05.959 23:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:42:05.959 23:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:42:05.959 23:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:42:05.959 23:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:42:05.959 23:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:42:05.959 23:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:42:05.959 23:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:42:05.959 23:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:42:05.959 23:23:46 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:42:05.959 23:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:42:05.959 23:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:42:05.959 23:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86381 00:42:05.959 23:23:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86381 00:42:05.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:05.959 23:23:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86381 ']' 00:42:05.959 23:23:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:05.959 23:23:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:05.959 23:23:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:05.959 23:23:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:05.959 23:23:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:05.959 I/O size of 3145728 is greater than zero copy threshold (65536). 00:42:05.959 Zero copy mechanism will not be used. 00:42:05.959 [2024-12-09 23:23:46.511767] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:42:05.959 [2024-12-09 23:23:46.511893] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86381 ] 00:42:06.217 [2024-12-09 23:23:46.692466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:06.217 [2024-12-09 23:23:46.806689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:06.527 [2024-12-09 23:23:47.018950] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:42:06.527 [2024-12-09 23:23:47.018994] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:42:06.787 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:06.787 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:42:06.787 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:42:06.787 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:42:06.787 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:06.787 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:06.787 BaseBdev1_malloc 00:42:06.787 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:06.787 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:42:06.787 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:06.787 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:07.046 [2024-12-09 23:23:47.425926] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:42:07.047 [2024-12-09 23:23:47.425992] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:07.047 [2024-12-09 23:23:47.426017] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:42:07.047 [2024-12-09 23:23:47.426039] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:07.047 [2024-12-09 23:23:47.428491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:07.047 [2024-12-09 23:23:47.428529] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:42:07.047 BaseBdev1 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:07.047 BaseBdev2_malloc 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:07.047 [2024-12-09 23:23:47.483856] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:42:07.047 [2024-12-09 23:23:47.483922] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:42:07.047 [2024-12-09 23:23:47.483944] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:42:07.047 [2024-12-09 23:23:47.483960] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:07.047 [2024-12-09 23:23:47.486372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:07.047 [2024-12-09 23:23:47.486421] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:42:07.047 BaseBdev2 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:07.047 spare_malloc 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:07.047 spare_delay 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:07.047 
[2024-12-09 23:23:47.565122] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:42:07.047 [2024-12-09 23:23:47.565181] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:07.047 [2024-12-09 23:23:47.565202] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:42:07.047 [2024-12-09 23:23:47.565216] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:07.047 [2024-12-09 23:23:47.567591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:07.047 [2024-12-09 23:23:47.567629] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:42:07.047 spare 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:07.047 [2024-12-09 23:23:47.577171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:42:07.047 [2024-12-09 23:23:47.579235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:42:07.047 [2024-12-09 23:23:47.579440] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:42:07.047 [2024-12-09 23:23:47.579459] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:42:07.047 [2024-12-09 23:23:47.579717] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:42:07.047 [2024-12-09 23:23:47.579886] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:42:07.047 [2024-12-09 
23:23:47.579906] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:42:07.047 [2024-12-09 23:23:47.580064] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:07.047 "name": "raid_bdev1", 00:42:07.047 "uuid": "f3decd68-5c3e-4a82-bc27-c85cfe3cedc2", 00:42:07.047 "strip_size_kb": 0, 00:42:07.047 "state": "online", 00:42:07.047 "raid_level": "raid1", 00:42:07.047 "superblock": true, 00:42:07.047 "num_base_bdevs": 2, 00:42:07.047 "num_base_bdevs_discovered": 2, 00:42:07.047 "num_base_bdevs_operational": 2, 00:42:07.047 "base_bdevs_list": [ 00:42:07.047 { 00:42:07.047 "name": "BaseBdev1", 00:42:07.047 "uuid": "ced5b5f5-5714-53bf-abb9-8aeb7eebc96e", 00:42:07.047 "is_configured": true, 00:42:07.047 "data_offset": 256, 00:42:07.047 "data_size": 7936 00:42:07.047 }, 00:42:07.047 { 00:42:07.047 "name": "BaseBdev2", 00:42:07.047 "uuid": "ba0856e3-57d2-539f-a671-4c5d22b00ebf", 00:42:07.047 "is_configured": true, 00:42:07.047 "data_offset": 256, 00:42:07.047 "data_size": 7936 00:42:07.047 } 00:42:07.047 ] 00:42:07.047 }' 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:07.047 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:07.613 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:42:07.613 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:42:07.613 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:07.613 23:23:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:07.613 [2024-12-09 23:23:47.976850] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:42:07.613 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:07.614 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:42:07.614 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:07.614 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:42:07.614 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:07.614 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:07.614 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:07.614 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:42:07.614 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:42:07.614 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:42:07.614 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:42:07.614 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:42:07.614 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:42:07.614 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:42:07.614 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:42:07.614 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:42:07.614 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:42:07.614 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:42:07.614 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:42:07.614 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:07.614 
23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:42:07.872 [2024-12-09 23:23:48.256316] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:42:07.872 /dev/nbd0 00:42:07.872 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:42:07.872 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:42:07.872 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:42:07.872 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:42:07.872 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:42:07.872 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:42:07.872 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:42:07.872 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:42:07.872 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:42:07.872 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:42:07.872 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:07.872 1+0 records in 00:42:07.872 1+0 records out 00:42:07.872 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350848 s, 11.7 MB/s 00:42:07.872 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:07.872 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:42:07.872 23:23:48 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:07.872 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:42:07.872 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:42:07.872 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:07.872 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:07.872 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:42:07.872 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:42:07.872 23:23:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:42:08.808 7936+0 records in 00:42:08.808 7936+0 records out 00:42:08.808 32505856 bytes (33 MB, 31 MiB) copied, 0.810073 s, 40.1 MB/s 00:42:08.808 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:42:08.808 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:42:08.808 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:42:08.808 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:08.808 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:42:08.808 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:08.808 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:42:08.808 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:42:08.808 
[2024-12-09 23:23:49.365547] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:08.808 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:42:08.808 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:42:08.808 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:08.808 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:08.808 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:08.808 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:42:08.808 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:42:08.808 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:42:08.808 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:08.808 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:08.808 [2024-12-09 23:23:49.381632] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:42:08.808 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:08.808 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:42:08.808 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:08.808 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:08.808 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:08.808 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:08.808 23:23:49 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:42:08.808 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:08.808 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:08.808 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:08.808 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:08.808 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:08.808 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:08.808 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:08.808 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:08.808 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:08.808 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:08.808 "name": "raid_bdev1", 00:42:08.808 "uuid": "f3decd68-5c3e-4a82-bc27-c85cfe3cedc2", 00:42:08.808 "strip_size_kb": 0, 00:42:08.808 "state": "online", 00:42:08.808 "raid_level": "raid1", 00:42:08.808 "superblock": true, 00:42:08.808 "num_base_bdevs": 2, 00:42:08.808 "num_base_bdevs_discovered": 1, 00:42:08.808 "num_base_bdevs_operational": 1, 00:42:08.808 "base_bdevs_list": [ 00:42:08.808 { 00:42:08.808 "name": null, 00:42:08.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:08.808 "is_configured": false, 00:42:08.808 "data_offset": 0, 00:42:08.808 "data_size": 7936 00:42:08.808 }, 00:42:08.808 { 00:42:08.808 "name": "BaseBdev2", 00:42:08.808 "uuid": "ba0856e3-57d2-539f-a671-4c5d22b00ebf", 00:42:08.808 "is_configured": true, 00:42:08.808 "data_offset": 256, 00:42:08.808 
"data_size": 7936 00:42:08.808 } 00:42:08.808 ] 00:42:08.808 }' 00:42:08.808 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:08.808 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:09.376 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:42:09.376 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:09.376 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:09.376 [2024-12-09 23:23:49.805673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:09.376 [2024-12-09 23:23:49.824644] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:42:09.376 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:09.376 23:23:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:42:09.376 [2024-12-09 23:23:49.827143] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:42:10.311 23:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:10.311 23:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:10.311 23:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:10.311 23:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:10.311 23:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:10.311 23:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:10.311 23:23:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:42:10.311 23:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:10.311 23:23:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:10.311 23:23:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:10.311 23:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:10.311 "name": "raid_bdev1", 00:42:10.311 "uuid": "f3decd68-5c3e-4a82-bc27-c85cfe3cedc2", 00:42:10.311 "strip_size_kb": 0, 00:42:10.311 "state": "online", 00:42:10.311 "raid_level": "raid1", 00:42:10.311 "superblock": true, 00:42:10.311 "num_base_bdevs": 2, 00:42:10.311 "num_base_bdevs_discovered": 2, 00:42:10.311 "num_base_bdevs_operational": 2, 00:42:10.311 "process": { 00:42:10.311 "type": "rebuild", 00:42:10.311 "target": "spare", 00:42:10.311 "progress": { 00:42:10.311 "blocks": 2560, 00:42:10.311 "percent": 32 00:42:10.311 } 00:42:10.311 }, 00:42:10.311 "base_bdevs_list": [ 00:42:10.311 { 00:42:10.311 "name": "spare", 00:42:10.311 "uuid": "d9ed9d10-7686-5710-be41-3959bfeef8e4", 00:42:10.311 "is_configured": true, 00:42:10.311 "data_offset": 256, 00:42:10.311 "data_size": 7936 00:42:10.311 }, 00:42:10.311 { 00:42:10.311 "name": "BaseBdev2", 00:42:10.311 "uuid": "ba0856e3-57d2-539f-a671-4c5d22b00ebf", 00:42:10.311 "is_configured": true, 00:42:10.311 "data_offset": 256, 00:42:10.311 "data_size": 7936 00:42:10.311 } 00:42:10.311 ] 00:42:10.311 }' 00:42:10.311 23:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:10.311 23:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:10.311 23:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:10.570 23:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:42:10.570 23:23:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:42:10.570 23:23:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:10.570 23:23:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:10.570 [2024-12-09 23:23:50.955080] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:10.570 [2024-12-09 23:23:51.037738] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:42:10.570 [2024-12-09 23:23:51.037834] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:10.570 [2024-12-09 23:23:51.037855] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:10.570 [2024-12-09 23:23:51.037871] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:42:10.570 23:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:10.570 23:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:42:10.570 23:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:10.570 23:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:10.570 23:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:10.570 23:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:10.570 23:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:42:10.570 23:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:10.570 23:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:42:10.570 23:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:10.570 23:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:10.570 23:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:10.570 23:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:10.570 23:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:10.570 23:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:10.570 23:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:10.570 23:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:10.570 "name": "raid_bdev1", 00:42:10.570 "uuid": "f3decd68-5c3e-4a82-bc27-c85cfe3cedc2", 00:42:10.570 "strip_size_kb": 0, 00:42:10.570 "state": "online", 00:42:10.570 "raid_level": "raid1", 00:42:10.570 "superblock": true, 00:42:10.570 "num_base_bdevs": 2, 00:42:10.570 "num_base_bdevs_discovered": 1, 00:42:10.570 "num_base_bdevs_operational": 1, 00:42:10.570 "base_bdevs_list": [ 00:42:10.570 { 00:42:10.570 "name": null, 00:42:10.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:10.570 "is_configured": false, 00:42:10.570 "data_offset": 0, 00:42:10.570 "data_size": 7936 00:42:10.570 }, 00:42:10.570 { 00:42:10.570 "name": "BaseBdev2", 00:42:10.570 "uuid": "ba0856e3-57d2-539f-a671-4c5d22b00ebf", 00:42:10.570 "is_configured": true, 00:42:10.570 "data_offset": 256, 00:42:10.570 "data_size": 7936 00:42:10.570 } 00:42:10.570 ] 00:42:10.570 }' 00:42:10.570 23:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:10.570 23:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:11.137 23:23:51 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:11.137 23:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:11.137 23:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:42:11.137 23:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:42:11.137 23:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:11.137 23:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:11.137 23:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:11.137 23:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:11.137 23:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:11.137 23:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:11.137 23:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:11.137 "name": "raid_bdev1", 00:42:11.137 "uuid": "f3decd68-5c3e-4a82-bc27-c85cfe3cedc2", 00:42:11.137 "strip_size_kb": 0, 00:42:11.137 "state": "online", 00:42:11.137 "raid_level": "raid1", 00:42:11.137 "superblock": true, 00:42:11.137 "num_base_bdevs": 2, 00:42:11.137 "num_base_bdevs_discovered": 1, 00:42:11.137 "num_base_bdevs_operational": 1, 00:42:11.137 "base_bdevs_list": [ 00:42:11.138 { 00:42:11.138 "name": null, 00:42:11.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:11.138 "is_configured": false, 00:42:11.138 "data_offset": 0, 00:42:11.138 "data_size": 7936 00:42:11.138 }, 00:42:11.138 { 00:42:11.138 "name": "BaseBdev2", 00:42:11.138 "uuid": "ba0856e3-57d2-539f-a671-4c5d22b00ebf", 00:42:11.138 "is_configured": true, 00:42:11.138 "data_offset": 
256, 00:42:11.138 "data_size": 7936 00:42:11.138 } 00:42:11.138 ] 00:42:11.138 }' 00:42:11.138 23:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:11.138 23:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:42:11.138 23:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:11.138 23:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:42:11.138 23:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:42:11.138 23:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:11.138 23:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:11.138 [2024-12-09 23:23:51.624664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:11.138 [2024-12-09 23:23:51.643383] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:42:11.138 23:23:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:11.138 23:23:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:42:11.138 [2024-12-09 23:23:51.645932] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:42:12.072 23:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:12.072 23:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:12.072 23:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:12.072 23:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:12.072 23:23:52 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:12.072 23:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:12.072 23:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:12.072 23:23:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:12.072 23:23:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:12.072 23:23:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:12.072 23:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:12.072 "name": "raid_bdev1", 00:42:12.072 "uuid": "f3decd68-5c3e-4a82-bc27-c85cfe3cedc2", 00:42:12.072 "strip_size_kb": 0, 00:42:12.072 "state": "online", 00:42:12.072 "raid_level": "raid1", 00:42:12.072 "superblock": true, 00:42:12.072 "num_base_bdevs": 2, 00:42:12.072 "num_base_bdevs_discovered": 2, 00:42:12.072 "num_base_bdevs_operational": 2, 00:42:12.072 "process": { 00:42:12.072 "type": "rebuild", 00:42:12.072 "target": "spare", 00:42:12.072 "progress": { 00:42:12.072 "blocks": 2560, 00:42:12.072 "percent": 32 00:42:12.072 } 00:42:12.072 }, 00:42:12.073 "base_bdevs_list": [ 00:42:12.073 { 00:42:12.073 "name": "spare", 00:42:12.073 "uuid": "d9ed9d10-7686-5710-be41-3959bfeef8e4", 00:42:12.073 "is_configured": true, 00:42:12.073 "data_offset": 256, 00:42:12.073 "data_size": 7936 00:42:12.073 }, 00:42:12.073 { 00:42:12.073 "name": "BaseBdev2", 00:42:12.073 "uuid": "ba0856e3-57d2-539f-a671-4c5d22b00ebf", 00:42:12.073 "is_configured": true, 00:42:12.073 "data_offset": 256, 00:42:12.073 "data_size": 7936 00:42:12.073 } 00:42:12.073 ] 00:42:12.073 }' 00:42:12.073 23:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:12.331 23:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:42:12.331 23:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:12.331 23:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:12.331 23:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:42:12.331 23:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:42:12.331 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:42:12.331 23:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:42:12.331 23:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:42:12.331 23:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:42:12.331 23:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=682 00:42:12.331 23:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:42:12.331 23:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:12.331 23:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:12.331 23:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:12.331 23:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:12.331 23:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:12.331 23:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:12.331 23:23:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:12.331 23:23:52 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:12.331 23:23:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:12.331 23:23:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:12.331 23:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:12.331 "name": "raid_bdev1", 00:42:12.331 "uuid": "f3decd68-5c3e-4a82-bc27-c85cfe3cedc2", 00:42:12.331 "strip_size_kb": 0, 00:42:12.331 "state": "online", 00:42:12.331 "raid_level": "raid1", 00:42:12.331 "superblock": true, 00:42:12.331 "num_base_bdevs": 2, 00:42:12.331 "num_base_bdevs_discovered": 2, 00:42:12.331 "num_base_bdevs_operational": 2, 00:42:12.331 "process": { 00:42:12.331 "type": "rebuild", 00:42:12.331 "target": "spare", 00:42:12.331 "progress": { 00:42:12.331 "blocks": 2816, 00:42:12.331 "percent": 35 00:42:12.331 } 00:42:12.331 }, 00:42:12.331 "base_bdevs_list": [ 00:42:12.331 { 00:42:12.331 "name": "spare", 00:42:12.331 "uuid": "d9ed9d10-7686-5710-be41-3959bfeef8e4", 00:42:12.331 "is_configured": true, 00:42:12.331 "data_offset": 256, 00:42:12.331 "data_size": 7936 00:42:12.331 }, 00:42:12.331 { 00:42:12.331 "name": "BaseBdev2", 00:42:12.331 "uuid": "ba0856e3-57d2-539f-a671-4c5d22b00ebf", 00:42:12.331 "is_configured": true, 00:42:12.331 "data_offset": 256, 00:42:12.331 "data_size": 7936 00:42:12.331 } 00:42:12.331 ] 00:42:12.331 }' 00:42:12.331 23:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:12.331 23:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:12.331 23:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:12.331 23:23:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:12.331 23:23:52 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:42:13.709 23:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:42:13.709 23:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:13.709 23:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:13.709 23:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:13.709 23:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:13.709 23:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:13.709 23:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:13.709 23:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:13.709 23:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:13.709 23:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:13.709 23:23:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:13.709 23:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:13.709 "name": "raid_bdev1", 00:42:13.709 "uuid": "f3decd68-5c3e-4a82-bc27-c85cfe3cedc2", 00:42:13.709 "strip_size_kb": 0, 00:42:13.709 "state": "online", 00:42:13.709 "raid_level": "raid1", 00:42:13.709 "superblock": true, 00:42:13.709 "num_base_bdevs": 2, 00:42:13.709 "num_base_bdevs_discovered": 2, 00:42:13.709 "num_base_bdevs_operational": 2, 00:42:13.709 "process": { 00:42:13.709 "type": "rebuild", 00:42:13.709 "target": "spare", 00:42:13.709 "progress": { 00:42:13.709 "blocks": 5632, 00:42:13.709 "percent": 70 00:42:13.709 } 00:42:13.709 }, 00:42:13.709 "base_bdevs_list": [ 00:42:13.709 { 
00:42:13.709 "name": "spare", 00:42:13.709 "uuid": "d9ed9d10-7686-5710-be41-3959bfeef8e4", 00:42:13.709 "is_configured": true, 00:42:13.709 "data_offset": 256, 00:42:13.709 "data_size": 7936 00:42:13.709 }, 00:42:13.709 { 00:42:13.709 "name": "BaseBdev2", 00:42:13.709 "uuid": "ba0856e3-57d2-539f-a671-4c5d22b00ebf", 00:42:13.709 "is_configured": true, 00:42:13.710 "data_offset": 256, 00:42:13.710 "data_size": 7936 00:42:13.710 } 00:42:13.710 ] 00:42:13.710 }' 00:42:13.710 23:23:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:13.710 23:23:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:13.710 23:23:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:13.710 23:23:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:13.710 23:23:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:42:14.277 [2024-12-09 23:23:54.771380] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:42:14.277 [2024-12-09 23:23:54.771533] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:42:14.277 [2024-12-09 23:23:54.771723] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:14.537 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:42:14.537 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:14.537 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:14.537 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:14.537 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:42:14.537 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:14.537 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:14.537 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.537 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:14.537 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:14.537 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.537 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:14.537 "name": "raid_bdev1", 00:42:14.537 "uuid": "f3decd68-5c3e-4a82-bc27-c85cfe3cedc2", 00:42:14.537 "strip_size_kb": 0, 00:42:14.537 "state": "online", 00:42:14.537 "raid_level": "raid1", 00:42:14.537 "superblock": true, 00:42:14.537 "num_base_bdevs": 2, 00:42:14.537 "num_base_bdevs_discovered": 2, 00:42:14.537 "num_base_bdevs_operational": 2, 00:42:14.537 "base_bdevs_list": [ 00:42:14.537 { 00:42:14.537 "name": "spare", 00:42:14.537 "uuid": "d9ed9d10-7686-5710-be41-3959bfeef8e4", 00:42:14.537 "is_configured": true, 00:42:14.537 "data_offset": 256, 00:42:14.537 "data_size": 7936 00:42:14.537 }, 00:42:14.537 { 00:42:14.537 "name": "BaseBdev2", 00:42:14.537 "uuid": "ba0856e3-57d2-539f-a671-4c5d22b00ebf", 00:42:14.537 "is_configured": true, 00:42:14.537 "data_offset": 256, 00:42:14.537 "data_size": 7936 00:42:14.537 } 00:42:14.537 ] 00:42:14.537 }' 00:42:14.537 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:14.796 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:42:14.796 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:42:14.796 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:42:14.796 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:42:14.796 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:14.796 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:14.796 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:42:14.796 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:42:14.796 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:14.796 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:14.796 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:14.796 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.796 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:14.796 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.796 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:14.796 "name": "raid_bdev1", 00:42:14.796 "uuid": "f3decd68-5c3e-4a82-bc27-c85cfe3cedc2", 00:42:14.796 "strip_size_kb": 0, 00:42:14.796 "state": "online", 00:42:14.796 "raid_level": "raid1", 00:42:14.796 "superblock": true, 00:42:14.796 "num_base_bdevs": 2, 00:42:14.796 "num_base_bdevs_discovered": 2, 00:42:14.796 "num_base_bdevs_operational": 2, 00:42:14.796 "base_bdevs_list": [ 00:42:14.796 { 00:42:14.796 "name": "spare", 00:42:14.796 "uuid": "d9ed9d10-7686-5710-be41-3959bfeef8e4", 00:42:14.796 "is_configured": true, 00:42:14.796 
"data_offset": 256, 00:42:14.796 "data_size": 7936 00:42:14.796 }, 00:42:14.796 { 00:42:14.796 "name": "BaseBdev2", 00:42:14.796 "uuid": "ba0856e3-57d2-539f-a671-4c5d22b00ebf", 00:42:14.796 "is_configured": true, 00:42:14.796 "data_offset": 256, 00:42:14.796 "data_size": 7936 00:42:14.796 } 00:42:14.796 ] 00:42:14.796 }' 00:42:14.796 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:14.796 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:42:14.796 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:14.796 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:42:14.796 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:42:14.796 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:14.796 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:14.796 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:14.796 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:14.796 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:42:14.796 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:14.796 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:14.796 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:14.796 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:14.796 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:42:14.796 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:14.796 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.796 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:14.796 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.796 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:14.796 "name": "raid_bdev1", 00:42:14.796 "uuid": "f3decd68-5c3e-4a82-bc27-c85cfe3cedc2", 00:42:14.796 "strip_size_kb": 0, 00:42:14.796 "state": "online", 00:42:14.796 "raid_level": "raid1", 00:42:14.796 "superblock": true, 00:42:14.796 "num_base_bdevs": 2, 00:42:14.796 "num_base_bdevs_discovered": 2, 00:42:14.796 "num_base_bdevs_operational": 2, 00:42:14.796 "base_bdevs_list": [ 00:42:14.796 { 00:42:14.796 "name": "spare", 00:42:14.796 "uuid": "d9ed9d10-7686-5710-be41-3959bfeef8e4", 00:42:14.796 "is_configured": true, 00:42:14.796 "data_offset": 256, 00:42:14.796 "data_size": 7936 00:42:14.796 }, 00:42:14.796 { 00:42:14.796 "name": "BaseBdev2", 00:42:14.796 "uuid": "ba0856e3-57d2-539f-a671-4c5d22b00ebf", 00:42:14.796 "is_configured": true, 00:42:14.796 "data_offset": 256, 00:42:14.796 "data_size": 7936 00:42:14.796 } 00:42:14.796 ] 00:42:14.796 }' 00:42:14.796 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:14.796 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:15.364 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:42:15.364 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:15.364 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:15.364 
[2024-12-09 23:23:55.826697] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:42:15.364 [2024-12-09 23:23:55.826760] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:42:15.364 [2024-12-09 23:23:55.826884] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:15.364 [2024-12-09 23:23:55.826978] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:15.364 [2024-12-09 23:23:55.826996] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:42:15.364 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:15.364 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:42:15.364 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:15.364 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:15.364 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:15.364 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:15.364 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:42:15.364 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:42:15.364 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:42:15.364 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:42:15.364 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:42:15.364 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:42:15.364 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:42:15.364 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:15.364 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:42:15.364 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:42:15.364 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:42:15.364 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:42:15.364 23:23:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:42:15.624 /dev/nbd0 00:42:15.624 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:42:15.624 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:42:15.624 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:42:15.624 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:42:15.624 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:42:15.624 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:42:15.624 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:42:15.624 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:42:15.624 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:42:15.624 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:42:15.624 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:15.624 1+0 records in 00:42:15.624 1+0 records out 00:42:15.624 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292566 s, 14.0 MB/s 00:42:15.624 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:15.624 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:42:15.624 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:15.624 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:42:15.624 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:42:15.624 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:15.624 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:42:15.624 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:42:15.884 /dev/nbd1 00:42:15.884 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:42:15.884 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:42:15.884 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:42:15.884 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:42:15.884 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:42:15.884 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:42:15.884 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:42:15.884 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:42:15.884 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:42:15.884 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:42:15.884 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:15.884 1+0 records in 00:42:15.884 1+0 records out 00:42:15.884 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452057 s, 9.1 MB/s 00:42:15.884 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:15.884 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:42:15.884 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:15.884 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:42:15.884 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:42:15.884 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:15.884 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:42:15.884 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:42:16.143 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:42:16.143 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:42:16.143 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:16.143 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:16.143 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:42:16.143 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:16.143 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:42:16.402 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:42:16.402 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:42:16.402 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:42:16.402 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:16.402 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:16.402 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:16.402 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:42:16.402 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:42:16.402 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:16.402 23:23:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:42:16.662 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:42:16.662 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:42:16.662 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:42:16.662 23:23:57 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:16.662 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:16.662 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:42:16.662 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:42:16.662 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:42:16.662 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:42:16.662 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:42:16.662 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:16.662 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:16.662 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:16.662 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:42:16.662 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:16.662 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:16.662 [2024-12-09 23:23:57.225178] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:42:16.662 [2024-12-09 23:23:57.225241] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:16.662 [2024-12-09 23:23:57.225268] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:42:16.662 [2024-12-09 23:23:57.225280] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:16.662 [2024-12-09 23:23:57.227814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:16.662 
[2024-12-09 23:23:57.227852] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:42:16.662 [2024-12-09 23:23:57.227950] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:42:16.662 [2024-12-09 23:23:57.228004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:16.662 [2024-12-09 23:23:57.228165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:42:16.662 spare 00:42:16.662 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:16.662 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:42:16.662 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:16.662 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:16.921 [2024-12-09 23:23:57.328100] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:42:16.921 [2024-12-09 23:23:57.328162] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:42:16.921 [2024-12-09 23:23:57.328529] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:42:16.921 [2024-12-09 23:23:57.328749] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:42:16.921 [2024-12-09 23:23:57.328762] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:42:16.921 [2024-12-09 23:23:57.328949] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:16.921 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:16.921 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:42:16.921 23:23:57 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:16.921 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:16.922 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:16.922 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:16.922 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:42:16.922 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:16.922 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:16.922 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:16.922 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:16.922 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:16.922 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:16.922 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:16.922 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:16.922 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:16.922 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:16.922 "name": "raid_bdev1", 00:42:16.922 "uuid": "f3decd68-5c3e-4a82-bc27-c85cfe3cedc2", 00:42:16.922 "strip_size_kb": 0, 00:42:16.922 "state": "online", 00:42:16.922 "raid_level": "raid1", 00:42:16.922 "superblock": true, 00:42:16.922 "num_base_bdevs": 2, 00:42:16.922 "num_base_bdevs_discovered": 2, 00:42:16.922 "num_base_bdevs_operational": 2, 
00:42:16.922 "base_bdevs_list": [ 00:42:16.922 { 00:42:16.922 "name": "spare", 00:42:16.922 "uuid": "d9ed9d10-7686-5710-be41-3959bfeef8e4", 00:42:16.922 "is_configured": true, 00:42:16.922 "data_offset": 256, 00:42:16.922 "data_size": 7936 00:42:16.922 }, 00:42:16.922 { 00:42:16.922 "name": "BaseBdev2", 00:42:16.922 "uuid": "ba0856e3-57d2-539f-a671-4c5d22b00ebf", 00:42:16.922 "is_configured": true, 00:42:16.922 "data_offset": 256, 00:42:16.922 "data_size": 7936 00:42:16.922 } 00:42:16.922 ] 00:42:16.922 }' 00:42:16.922 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:16.922 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:17.181 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:17.181 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:17.181 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:42:17.181 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:42:17.181 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:17.181 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:17.181 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:17.181 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:17.181 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:17.181 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:17.441 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:17.441 "name": "raid_bdev1", 00:42:17.441 
"uuid": "f3decd68-5c3e-4a82-bc27-c85cfe3cedc2", 00:42:17.441 "strip_size_kb": 0, 00:42:17.441 "state": "online", 00:42:17.441 "raid_level": "raid1", 00:42:17.441 "superblock": true, 00:42:17.441 "num_base_bdevs": 2, 00:42:17.441 "num_base_bdevs_discovered": 2, 00:42:17.441 "num_base_bdevs_operational": 2, 00:42:17.441 "base_bdevs_list": [ 00:42:17.441 { 00:42:17.441 "name": "spare", 00:42:17.441 "uuid": "d9ed9d10-7686-5710-be41-3959bfeef8e4", 00:42:17.441 "is_configured": true, 00:42:17.441 "data_offset": 256, 00:42:17.441 "data_size": 7936 00:42:17.441 }, 00:42:17.441 { 00:42:17.441 "name": "BaseBdev2", 00:42:17.441 "uuid": "ba0856e3-57d2-539f-a671-4c5d22b00ebf", 00:42:17.441 "is_configured": true, 00:42:17.441 "data_offset": 256, 00:42:17.441 "data_size": 7936 00:42:17.441 } 00:42:17.441 ] 00:42:17.441 }' 00:42:17.441 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:17.441 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:42:17.441 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:17.441 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:42:17.441 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:42:17.441 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:17.441 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:17.441 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:17.441 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:17.441 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:42:17.441 23:23:57 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:42:17.441 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:17.441 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:17.441 [2024-12-09 23:23:57.944236] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:17.441 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:17.441 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:42:17.441 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:17.441 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:17.441 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:17.441 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:17.441 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:42:17.441 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:17.441 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:17.441 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:17.441 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:17.441 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:17.441 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:17.441 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:17.441 
23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:17.441 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:17.441 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:17.441 "name": "raid_bdev1", 00:42:17.441 "uuid": "f3decd68-5c3e-4a82-bc27-c85cfe3cedc2", 00:42:17.441 "strip_size_kb": 0, 00:42:17.441 "state": "online", 00:42:17.441 "raid_level": "raid1", 00:42:17.441 "superblock": true, 00:42:17.441 "num_base_bdevs": 2, 00:42:17.441 "num_base_bdevs_discovered": 1, 00:42:17.441 "num_base_bdevs_operational": 1, 00:42:17.441 "base_bdevs_list": [ 00:42:17.441 { 00:42:17.441 "name": null, 00:42:17.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:17.441 "is_configured": false, 00:42:17.441 "data_offset": 0, 00:42:17.441 "data_size": 7936 00:42:17.441 }, 00:42:17.441 { 00:42:17.441 "name": "BaseBdev2", 00:42:17.441 "uuid": "ba0856e3-57d2-539f-a671-4c5d22b00ebf", 00:42:17.441 "is_configured": true, 00:42:17.441 "data_offset": 256, 00:42:17.441 "data_size": 7936 00:42:17.441 } 00:42:17.441 ] 00:42:17.441 }' 00:42:17.441 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:17.441 23:23:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:18.009 23:23:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:42:18.009 23:23:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:18.009 23:23:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:18.009 [2024-12-09 23:23:58.351681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:18.009 [2024-12-09 23:23:58.351889] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:42:18.009 [2024-12-09 23:23:58.351913] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:42:18.009 [2024-12-09 23:23:58.351946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:18.009 [2024-12-09 23:23:58.368590] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:42:18.009 23:23:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:18.009 23:23:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:42:18.009 [2024-12-09 23:23:58.370688] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:42:18.945 23:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:18.945 23:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:18.945 23:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:18.945 23:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:18.945 23:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:18.945 23:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:18.945 23:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:18.945 23:23:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:18.945 23:23:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:18.945 23:23:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:18.945 23:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:18.945 
"name": "raid_bdev1", 00:42:18.945 "uuid": "f3decd68-5c3e-4a82-bc27-c85cfe3cedc2", 00:42:18.945 "strip_size_kb": 0, 00:42:18.945 "state": "online", 00:42:18.945 "raid_level": "raid1", 00:42:18.945 "superblock": true, 00:42:18.945 "num_base_bdevs": 2, 00:42:18.945 "num_base_bdevs_discovered": 2, 00:42:18.945 "num_base_bdevs_operational": 2, 00:42:18.945 "process": { 00:42:18.945 "type": "rebuild", 00:42:18.945 "target": "spare", 00:42:18.945 "progress": { 00:42:18.945 "blocks": 2560, 00:42:18.945 "percent": 32 00:42:18.945 } 00:42:18.945 }, 00:42:18.945 "base_bdevs_list": [ 00:42:18.945 { 00:42:18.945 "name": "spare", 00:42:18.945 "uuid": "d9ed9d10-7686-5710-be41-3959bfeef8e4", 00:42:18.945 "is_configured": true, 00:42:18.945 "data_offset": 256, 00:42:18.945 "data_size": 7936 00:42:18.945 }, 00:42:18.945 { 00:42:18.945 "name": "BaseBdev2", 00:42:18.945 "uuid": "ba0856e3-57d2-539f-a671-4c5d22b00ebf", 00:42:18.945 "is_configured": true, 00:42:18.945 "data_offset": 256, 00:42:18.945 "data_size": 7936 00:42:18.945 } 00:42:18.945 ] 00:42:18.945 }' 00:42:18.945 23:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:18.945 23:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:18.945 23:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:18.946 23:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:18.946 23:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:42:18.946 23:23:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:18.946 23:23:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:18.946 [2024-12-09 23:23:59.514679] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:18.946 [2024-12-09 
23:23:59.577009] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:42:18.946 [2024-12-09 23:23:59.577206] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:18.946 [2024-12-09 23:23:59.577348] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:18.946 [2024-12-09 23:23:59.577371] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:42:19.204 23:23:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.204 23:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:42:19.204 23:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:19.204 23:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:19.205 23:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:19.205 23:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:19.205 23:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:42:19.205 23:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:19.205 23:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:19.205 23:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:19.205 23:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:19.205 23:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:19.205 23:23:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:19.205 23:23:59 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:19.205 23:23:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:19.205 23:23:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.205 23:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:19.205 "name": "raid_bdev1", 00:42:19.205 "uuid": "f3decd68-5c3e-4a82-bc27-c85cfe3cedc2", 00:42:19.205 "strip_size_kb": 0, 00:42:19.205 "state": "online", 00:42:19.205 "raid_level": "raid1", 00:42:19.205 "superblock": true, 00:42:19.205 "num_base_bdevs": 2, 00:42:19.205 "num_base_bdevs_discovered": 1, 00:42:19.205 "num_base_bdevs_operational": 1, 00:42:19.205 "base_bdevs_list": [ 00:42:19.205 { 00:42:19.205 "name": null, 00:42:19.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:19.205 "is_configured": false, 00:42:19.205 "data_offset": 0, 00:42:19.205 "data_size": 7936 00:42:19.205 }, 00:42:19.205 { 00:42:19.205 "name": "BaseBdev2", 00:42:19.205 "uuid": "ba0856e3-57d2-539f-a671-4c5d22b00ebf", 00:42:19.205 "is_configured": true, 00:42:19.205 "data_offset": 256, 00:42:19.205 "data_size": 7936 00:42:19.205 } 00:42:19.205 ] 00:42:19.205 }' 00:42:19.205 23:23:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:19.205 23:23:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:19.464 23:24:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:42:19.464 23:24:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:19.464 23:24:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:19.464 [2024-12-09 23:24:00.032522] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:42:19.464 [2024-12-09 23:24:00.032592] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:19.464 [2024-12-09 23:24:00.032618] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:42:19.464 [2024-12-09 23:24:00.032632] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:19.464 [2024-12-09 23:24:00.033118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:19.464 [2024-12-09 23:24:00.033142] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:42:19.464 [2024-12-09 23:24:00.033239] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:42:19.464 [2024-12-09 23:24:00.033256] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:42:19.464 [2024-12-09 23:24:00.033269] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:42:19.464 [2024-12-09 23:24:00.033296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:19.464 [2024-12-09 23:24:00.051234] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:42:19.464 spare 00:42:19.464 23:24:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.464 23:24:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:42:19.464 [2024-12-09 23:24:00.053427] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:42:20.445 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:20.445 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:20.445 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:20.445 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:20.445 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:20.445 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:20.445 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:20.445 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:20.445 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:20.712 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:20.712 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:20.712 "name": "raid_bdev1", 00:42:20.712 "uuid": "f3decd68-5c3e-4a82-bc27-c85cfe3cedc2", 00:42:20.712 "strip_size_kb": 0, 00:42:20.712 
"state": "online", 00:42:20.712 "raid_level": "raid1", 00:42:20.712 "superblock": true, 00:42:20.712 "num_base_bdevs": 2, 00:42:20.712 "num_base_bdevs_discovered": 2, 00:42:20.712 "num_base_bdevs_operational": 2, 00:42:20.712 "process": { 00:42:20.712 "type": "rebuild", 00:42:20.712 "target": "spare", 00:42:20.712 "progress": { 00:42:20.712 "blocks": 2560, 00:42:20.712 "percent": 32 00:42:20.712 } 00:42:20.712 }, 00:42:20.712 "base_bdevs_list": [ 00:42:20.712 { 00:42:20.712 "name": "spare", 00:42:20.712 "uuid": "d9ed9d10-7686-5710-be41-3959bfeef8e4", 00:42:20.712 "is_configured": true, 00:42:20.712 "data_offset": 256, 00:42:20.712 "data_size": 7936 00:42:20.712 }, 00:42:20.712 { 00:42:20.712 "name": "BaseBdev2", 00:42:20.712 "uuid": "ba0856e3-57d2-539f-a671-4c5d22b00ebf", 00:42:20.712 "is_configured": true, 00:42:20.712 "data_offset": 256, 00:42:20.712 "data_size": 7936 00:42:20.712 } 00:42:20.712 ] 00:42:20.712 }' 00:42:20.712 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:20.712 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:20.713 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:20.713 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:20.713 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:42:20.713 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:20.713 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:20.713 [2024-12-09 23:24:01.209553] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:20.713 [2024-12-09 23:24:01.258952] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:42:20.713 [2024-12-09 23:24:01.259187] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:20.713 [2024-12-09 23:24:01.259322] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:20.713 [2024-12-09 23:24:01.259341] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:42:20.713 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:20.713 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:42:20.713 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:20.713 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:20.713 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:20.713 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:20.713 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:42:20.713 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:20.713 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:20.713 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:20.713 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:20.713 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:20.713 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:20.713 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:20.713 23:24:01 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:20.713 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:20.971 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:20.971 "name": "raid_bdev1", 00:42:20.971 "uuid": "f3decd68-5c3e-4a82-bc27-c85cfe3cedc2", 00:42:20.971 "strip_size_kb": 0, 00:42:20.971 "state": "online", 00:42:20.971 "raid_level": "raid1", 00:42:20.971 "superblock": true, 00:42:20.971 "num_base_bdevs": 2, 00:42:20.971 "num_base_bdevs_discovered": 1, 00:42:20.971 "num_base_bdevs_operational": 1, 00:42:20.971 "base_bdevs_list": [ 00:42:20.971 { 00:42:20.971 "name": null, 00:42:20.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:20.971 "is_configured": false, 00:42:20.971 "data_offset": 0, 00:42:20.971 "data_size": 7936 00:42:20.971 }, 00:42:20.971 { 00:42:20.971 "name": "BaseBdev2", 00:42:20.971 "uuid": "ba0856e3-57d2-539f-a671-4c5d22b00ebf", 00:42:20.971 "is_configured": true, 00:42:20.971 "data_offset": 256, 00:42:20.971 "data_size": 7936 00:42:20.971 } 00:42:20.971 ] 00:42:20.971 }' 00:42:20.971 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:20.971 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:21.230 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:21.230 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:21.230 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:42:21.230 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:42:21.230 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:21.230 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:21.230 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.230 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:21.230 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:21.230 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.230 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:21.230 "name": "raid_bdev1", 00:42:21.230 "uuid": "f3decd68-5c3e-4a82-bc27-c85cfe3cedc2", 00:42:21.230 "strip_size_kb": 0, 00:42:21.230 "state": "online", 00:42:21.230 "raid_level": "raid1", 00:42:21.230 "superblock": true, 00:42:21.230 "num_base_bdevs": 2, 00:42:21.230 "num_base_bdevs_discovered": 1, 00:42:21.230 "num_base_bdevs_operational": 1, 00:42:21.230 "base_bdevs_list": [ 00:42:21.230 { 00:42:21.230 "name": null, 00:42:21.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:21.230 "is_configured": false, 00:42:21.230 "data_offset": 0, 00:42:21.230 "data_size": 7936 00:42:21.230 }, 00:42:21.230 { 00:42:21.230 "name": "BaseBdev2", 00:42:21.230 "uuid": "ba0856e3-57d2-539f-a671-4c5d22b00ebf", 00:42:21.230 "is_configured": true, 00:42:21.230 "data_offset": 256, 00:42:21.230 "data_size": 7936 00:42:21.230 } 00:42:21.230 ] 00:42:21.230 }' 00:42:21.230 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:21.230 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:42:21.230 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:21.230 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:42:21.230 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:42:21.230 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.230 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:21.230 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.230 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:42:21.230 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.230 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:21.230 [2024-12-09 23:24:01.807000] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:42:21.230 [2024-12-09 23:24:01.807179] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:21.230 [2024-12-09 23:24:01.807240] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:42:21.230 [2024-12-09 23:24:01.807341] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:21.230 [2024-12-09 23:24:01.807836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:21.230 [2024-12-09 23:24:01.807971] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:42:21.230 [2024-12-09 23:24:01.808079] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:42:21.230 [2024-12-09 23:24:01.808096] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:42:21.230 [2024-12-09 23:24:01.808111] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:42:21.230 [2024-12-09 23:24:01.808122] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:42:21.230 BaseBdev1 00:42:21.230 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.230 23:24:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:42:22.608 23:24:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:42:22.608 23:24:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:22.608 23:24:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:22.608 23:24:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:22.608 23:24:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:22.608 23:24:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:42:22.608 23:24:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:22.608 23:24:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:22.608 23:24:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:22.608 23:24:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:22.608 23:24:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:22.608 23:24:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:22.608 23:24:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:22.608 23:24:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:22.608 23:24:02 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:22.608 23:24:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:22.608 "name": "raid_bdev1", 00:42:22.608 "uuid": "f3decd68-5c3e-4a82-bc27-c85cfe3cedc2", 00:42:22.608 "strip_size_kb": 0, 00:42:22.608 "state": "online", 00:42:22.608 "raid_level": "raid1", 00:42:22.608 "superblock": true, 00:42:22.608 "num_base_bdevs": 2, 00:42:22.608 "num_base_bdevs_discovered": 1, 00:42:22.608 "num_base_bdevs_operational": 1, 00:42:22.608 "base_bdevs_list": [ 00:42:22.608 { 00:42:22.608 "name": null, 00:42:22.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:22.608 "is_configured": false, 00:42:22.608 "data_offset": 0, 00:42:22.608 "data_size": 7936 00:42:22.608 }, 00:42:22.608 { 00:42:22.608 "name": "BaseBdev2", 00:42:22.608 "uuid": "ba0856e3-57d2-539f-a671-4c5d22b00ebf", 00:42:22.608 "is_configured": true, 00:42:22.608 "data_offset": 256, 00:42:22.608 "data_size": 7936 00:42:22.608 } 00:42:22.608 ] 00:42:22.608 }' 00:42:22.608 23:24:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:22.608 23:24:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:22.866 23:24:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:22.866 23:24:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:22.866 23:24:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:42:22.866 23:24:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:42:22.867 23:24:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:22.867 23:24:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:22.867 23:24:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:42:22.867 23:24:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:22.867 23:24:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:22.867 23:24:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:22.867 23:24:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:22.867 "name": "raid_bdev1", 00:42:22.867 "uuid": "f3decd68-5c3e-4a82-bc27-c85cfe3cedc2", 00:42:22.867 "strip_size_kb": 0, 00:42:22.867 "state": "online", 00:42:22.867 "raid_level": "raid1", 00:42:22.867 "superblock": true, 00:42:22.867 "num_base_bdevs": 2, 00:42:22.867 "num_base_bdevs_discovered": 1, 00:42:22.867 "num_base_bdevs_operational": 1, 00:42:22.867 "base_bdevs_list": [ 00:42:22.867 { 00:42:22.867 "name": null, 00:42:22.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:22.867 "is_configured": false, 00:42:22.867 "data_offset": 0, 00:42:22.867 "data_size": 7936 00:42:22.867 }, 00:42:22.867 { 00:42:22.867 "name": "BaseBdev2", 00:42:22.867 "uuid": "ba0856e3-57d2-539f-a671-4c5d22b00ebf", 00:42:22.867 "is_configured": true, 00:42:22.867 "data_offset": 256, 00:42:22.867 "data_size": 7936 00:42:22.867 } 00:42:22.867 ] 00:42:22.867 }' 00:42:22.867 23:24:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:22.867 23:24:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:42:22.867 23:24:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:22.867 23:24:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:42:22.867 23:24:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:42:22.867 23:24:03 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:42:22.867 23:24:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:42:22.867 23:24:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:42:22.867 23:24:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:22.867 23:24:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:42:22.867 23:24:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:22.867 23:24:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:42:22.867 23:24:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:22.867 23:24:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:22.867 [2024-12-09 23:24:03.440803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:42:22.867 [2024-12-09 23:24:03.441100] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:42:22.867 [2024-12-09 23:24:03.441221] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:42:22.867 request: 00:42:22.867 { 00:42:22.867 "base_bdev": "BaseBdev1", 00:42:22.867 "raid_bdev": "raid_bdev1", 00:42:22.867 "method": "bdev_raid_add_base_bdev", 00:42:22.867 "req_id": 1 00:42:22.867 } 00:42:22.867 Got JSON-RPC error response 00:42:22.867 response: 00:42:22.867 { 00:42:22.867 "code": -22, 00:42:22.867 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:42:22.867 } 00:42:22.867 23:24:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:42:22.867 23:24:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:42:22.867 23:24:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:22.867 23:24:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:22.867 23:24:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:22.867 23:24:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:42:24.246 23:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:42:24.246 23:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:24.246 23:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:24.246 23:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:24.246 23:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:24.246 23:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:42:24.246 23:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:24.246 23:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:24.246 23:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:24.246 23:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:24.246 23:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:24.246 23:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:24.246 23:24:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:42:24.246 23:24:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:24.246 23:24:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:24.246 23:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:24.246 "name": "raid_bdev1", 00:42:24.246 "uuid": "f3decd68-5c3e-4a82-bc27-c85cfe3cedc2", 00:42:24.246 "strip_size_kb": 0, 00:42:24.246 "state": "online", 00:42:24.246 "raid_level": "raid1", 00:42:24.246 "superblock": true, 00:42:24.246 "num_base_bdevs": 2, 00:42:24.246 "num_base_bdevs_discovered": 1, 00:42:24.246 "num_base_bdevs_operational": 1, 00:42:24.246 "base_bdevs_list": [ 00:42:24.246 { 00:42:24.246 "name": null, 00:42:24.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:24.246 "is_configured": false, 00:42:24.246 "data_offset": 0, 00:42:24.246 "data_size": 7936 00:42:24.246 }, 00:42:24.246 { 00:42:24.246 "name": "BaseBdev2", 00:42:24.246 "uuid": "ba0856e3-57d2-539f-a671-4c5d22b00ebf", 00:42:24.246 "is_configured": true, 00:42:24.246 "data_offset": 256, 00:42:24.246 "data_size": 7936 00:42:24.246 } 00:42:24.246 ] 00:42:24.246 }' 00:42:24.246 23:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:24.246 23:24:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:24.506 23:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:24.506 23:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:24.506 23:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:42:24.506 23:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:42:24.506 23:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:24.506 23:24:04 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:24.506 23:24:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:24.506 23:24:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:24.506 23:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:24.506 23:24:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:24.506 23:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:24.506 "name": "raid_bdev1", 00:42:24.506 "uuid": "f3decd68-5c3e-4a82-bc27-c85cfe3cedc2", 00:42:24.506 "strip_size_kb": 0, 00:42:24.506 "state": "online", 00:42:24.506 "raid_level": "raid1", 00:42:24.506 "superblock": true, 00:42:24.506 "num_base_bdevs": 2, 00:42:24.506 "num_base_bdevs_discovered": 1, 00:42:24.506 "num_base_bdevs_operational": 1, 00:42:24.506 "base_bdevs_list": [ 00:42:24.506 { 00:42:24.506 "name": null, 00:42:24.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:24.506 "is_configured": false, 00:42:24.506 "data_offset": 0, 00:42:24.506 "data_size": 7936 00:42:24.506 }, 00:42:24.506 { 00:42:24.506 "name": "BaseBdev2", 00:42:24.506 "uuid": "ba0856e3-57d2-539f-a671-4c5d22b00ebf", 00:42:24.506 "is_configured": true, 00:42:24.506 "data_offset": 256, 00:42:24.506 "data_size": 7936 00:42:24.506 } 00:42:24.506 ] 00:42:24.506 }' 00:42:24.506 23:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:24.506 23:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:42:24.506 23:24:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:24.506 23:24:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:42:24.506 23:24:05 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86381 00:42:24.506 23:24:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86381 ']' 00:42:24.506 23:24:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86381 00:42:24.506 23:24:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:42:24.506 23:24:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:24.506 23:24:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86381 00:42:24.506 killing process with pid 86381 00:42:24.506 Received shutdown signal, test time was about 60.000000 seconds 00:42:24.506 00:42:24.506 Latency(us) 00:42:24.506 [2024-12-09T23:24:05.142Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:24.506 [2024-12-09T23:24:05.142Z] =================================================================================================================== 00:42:24.506 [2024-12-09T23:24:05.142Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:42:24.506 23:24:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:24.506 23:24:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:24.506 23:24:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86381' 00:42:24.506 23:24:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86381 00:42:24.506 [2024-12-09 23:24:05.068846] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:42:24.506 23:24:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86381 00:42:24.506 [2024-12-09 23:24:05.068978] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:24.506 [2024-12-09 
23:24:05.069031] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:24.506 [2024-12-09 23:24:05.069051] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:42:24.764 [2024-12-09 23:24:05.376345] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:42:26.172 23:24:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:42:26.172 00:42:26.172 real 0m20.102s 00:42:26.172 user 0m25.791s 00:42:26.172 sys 0m3.193s 00:42:26.172 23:24:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:26.172 ************************************ 00:42:26.172 END TEST raid_rebuild_test_sb_4k 00:42:26.172 ************************************ 00:42:26.172 23:24:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:26.172 23:24:06 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:42:26.172 23:24:06 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:42:26.172 23:24:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:42:26.172 23:24:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:26.172 23:24:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:42:26.172 ************************************ 00:42:26.172 START TEST raid_state_function_test_sb_md_separate 00:42:26.172 ************************************ 00:42:26.172 23:24:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:42:26.172 23:24:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:42:26.172 23:24:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:42:26.172 
23:24:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:42:26.172 23:24:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:42:26.172 23:24:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:42:26.172 23:24:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:42:26.172 23:24:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:42:26.172 23:24:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:42:26.172 23:24:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:42:26.172 23:24:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:42:26.173 23:24:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:42:26.173 23:24:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:42:26.173 23:24:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:42:26.173 23:24:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:42:26.173 23:24:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:42:26.173 23:24:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:42:26.173 23:24:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:42:26.173 23:24:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:42:26.173 23:24:06 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:42:26.173 23:24:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:42:26.173 23:24:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:42:26.173 23:24:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:42:26.173 23:24:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87075 00:42:26.173 23:24:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:42:26.173 23:24:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87075' 00:42:26.173 Process raid pid: 87075 00:42:26.173 23:24:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87075 00:42:26.173 23:24:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87075 ']' 00:42:26.173 23:24:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:26.173 23:24:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:26.173 23:24:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:26.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:42:26.173 23:24:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:26.173 23:24:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:26.173 [2024-12-09 23:24:06.685860] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:42:26.173 [2024-12-09 23:24:06.686178] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:26.432 [2024-12-09 23:24:06.867570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:26.432 [2024-12-09 23:24:06.990127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:26.691 [2024-12-09 23:24:07.205199] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:42:26.691 [2024-12-09 23:24:07.205241] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:42:26.950 23:24:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:26.950 23:24:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:42:26.950 23:24:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:42:26.950 23:24:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.950 23:24:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:26.950 [2024-12-09 23:24:07.521577] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:42:26.950 [2024-12-09 23:24:07.521758] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:42:26.950 [2024-12-09 23:24:07.521907] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:42:26.950 [2024-12-09 23:24:07.521955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:42:26.950 23:24:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.950 23:24:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:42:26.950 23:24:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:42:26.950 23:24:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:42:26.950 23:24:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:26.950 23:24:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:26.950 23:24:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:42:26.950 23:24:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:26.950 23:24:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:26.950 23:24:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:26.950 23:24:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:26.950 23:24:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:26.950 23:24:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:42:26.950 23:24:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.950 23:24:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:26.950 23:24:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.950 23:24:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:26.950 "name": "Existed_Raid", 00:42:26.950 "uuid": "6fcf9bd8-b643-48eb-bb31-7549f9c17475", 00:42:26.950 "strip_size_kb": 0, 00:42:26.950 "state": "configuring", 00:42:26.950 "raid_level": "raid1", 00:42:26.950 "superblock": true, 00:42:26.950 "num_base_bdevs": 2, 00:42:26.950 "num_base_bdevs_discovered": 0, 00:42:26.950 "num_base_bdevs_operational": 2, 00:42:26.950 "base_bdevs_list": [ 00:42:26.950 { 00:42:26.950 "name": "BaseBdev1", 00:42:26.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:26.950 "is_configured": false, 00:42:26.950 "data_offset": 0, 00:42:26.950 "data_size": 0 00:42:26.950 }, 00:42:26.950 { 00:42:26.950 "name": "BaseBdev2", 00:42:26.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:26.950 "is_configured": false, 00:42:26.950 "data_offset": 0, 00:42:26.950 "data_size": 0 00:42:26.950 } 00:42:26.950 ] 00:42:26.950 }' 00:42:26.950 23:24:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:26.950 23:24:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:27.516 23:24:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:42:27.516 23:24:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:27.516 23:24:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:27.516 
[2024-12-09 23:24:07.965573] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:42:27.516 [2024-12-09 23:24:07.965611] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:42:27.516 23:24:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:27.516 23:24:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:42:27.516 23:24:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:27.516 23:24:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:27.516 [2024-12-09 23:24:07.973584] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:42:27.516 [2024-12-09 23:24:07.973756] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:42:27.516 [2024-12-09 23:24:07.973865] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:42:27.516 [2024-12-09 23:24:07.973928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:42:27.516 23:24:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:27.516 23:24:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:42:27.516 23:24:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:27.516 23:24:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:27.516 [2024-12-09 23:24:08.024862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:42:27.516 
BaseBdev1 00:42:27.516 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:27.516 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:42:27.516 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:42:27.516 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:42:27.517 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:42:27.517 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:42:27.517 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:42:27.517 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:42:27.517 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:27.517 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:27.517 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:27.517 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:42:27.517 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:27.517 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:27.517 [ 00:42:27.517 { 00:42:27.517 "name": "BaseBdev1", 00:42:27.517 "aliases": [ 00:42:27.517 "4424027a-0800-4803-bc1a-b53314d647b9" 00:42:27.517 ], 00:42:27.517 "product_name": "Malloc disk", 
00:42:27.517 "block_size": 4096, 00:42:27.517 "num_blocks": 8192, 00:42:27.517 "uuid": "4424027a-0800-4803-bc1a-b53314d647b9", 00:42:27.517 "md_size": 32, 00:42:27.517 "md_interleave": false, 00:42:27.517 "dif_type": 0, 00:42:27.517 "assigned_rate_limits": { 00:42:27.517 "rw_ios_per_sec": 0, 00:42:27.517 "rw_mbytes_per_sec": 0, 00:42:27.517 "r_mbytes_per_sec": 0, 00:42:27.517 "w_mbytes_per_sec": 0 00:42:27.517 }, 00:42:27.517 "claimed": true, 00:42:27.517 "claim_type": "exclusive_write", 00:42:27.517 "zoned": false, 00:42:27.517 "supported_io_types": { 00:42:27.517 "read": true, 00:42:27.517 "write": true, 00:42:27.517 "unmap": true, 00:42:27.517 "flush": true, 00:42:27.517 "reset": true, 00:42:27.517 "nvme_admin": false, 00:42:27.517 "nvme_io": false, 00:42:27.517 "nvme_io_md": false, 00:42:27.517 "write_zeroes": true, 00:42:27.517 "zcopy": true, 00:42:27.517 "get_zone_info": false, 00:42:27.517 "zone_management": false, 00:42:27.517 "zone_append": false, 00:42:27.517 "compare": false, 00:42:27.517 "compare_and_write": false, 00:42:27.517 "abort": true, 00:42:27.517 "seek_hole": false, 00:42:27.517 "seek_data": false, 00:42:27.517 "copy": true, 00:42:27.517 "nvme_iov_md": false 00:42:27.517 }, 00:42:27.517 "memory_domains": [ 00:42:27.517 { 00:42:27.517 "dma_device_id": "system", 00:42:27.517 "dma_device_type": 1 00:42:27.517 }, 00:42:27.517 { 00:42:27.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:42:27.517 "dma_device_type": 2 00:42:27.517 } 00:42:27.517 ], 00:42:27.517 "driver_specific": {} 00:42:27.517 } 00:42:27.517 ] 00:42:27.517 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:27.517 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:42:27.517 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:42:27.517 23:24:08 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:42:27.517 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:42:27.517 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:27.517 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:27.517 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:42:27.517 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:27.517 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:27.517 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:27.517 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:27.517 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:27.517 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:27.517 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:27.517 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:42:27.517 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:27.517 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:27.517 "name": "Existed_Raid", 00:42:27.517 "uuid": "fbf03f2f-d7f2-44ab-afd6-0391afe87952", 
00:42:27.517 "strip_size_kb": 0, 00:42:27.517 "state": "configuring", 00:42:27.517 "raid_level": "raid1", 00:42:27.517 "superblock": true, 00:42:27.517 "num_base_bdevs": 2, 00:42:27.517 "num_base_bdevs_discovered": 1, 00:42:27.517 "num_base_bdevs_operational": 2, 00:42:27.517 "base_bdevs_list": [ 00:42:27.517 { 00:42:27.517 "name": "BaseBdev1", 00:42:27.517 "uuid": "4424027a-0800-4803-bc1a-b53314d647b9", 00:42:27.517 "is_configured": true, 00:42:27.517 "data_offset": 256, 00:42:27.517 "data_size": 7936 00:42:27.517 }, 00:42:27.517 { 00:42:27.517 "name": "BaseBdev2", 00:42:27.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:27.517 "is_configured": false, 00:42:27.517 "data_offset": 0, 00:42:27.517 "data_size": 0 00:42:27.517 } 00:42:27.517 ] 00:42:27.517 }' 00:42:27.517 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:27.517 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:28.082 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:42:28.082 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.082 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:28.082 [2024-12-09 23:24:08.428535] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:42:28.082 [2024-12-09 23:24:08.428739] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:42:28.082 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.082 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:42:28.082 23:24:08 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.082 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:28.082 [2024-12-09 23:24:08.436588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:42:28.082 [2024-12-09 23:24:08.438970] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:42:28.082 [2024-12-09 23:24:08.439143] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:42:28.082 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.082 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:42:28.082 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:42:28.082 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:42:28.082 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:42:28.082 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:42:28.082 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:28.082 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:28.082 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:42:28.082 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:28.082 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:28.082 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:28.082 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:28.082 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:28.082 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:42:28.082 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.082 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:28.082 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.082 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:28.082 "name": "Existed_Raid", 00:42:28.082 "uuid": "b4549c1d-ff7c-4eee-bc84-1f6b3e18ed8b", 00:42:28.082 "strip_size_kb": 0, 00:42:28.082 "state": "configuring", 00:42:28.082 "raid_level": "raid1", 00:42:28.082 "superblock": true, 00:42:28.082 "num_base_bdevs": 2, 00:42:28.082 "num_base_bdevs_discovered": 1, 00:42:28.082 "num_base_bdevs_operational": 2, 00:42:28.082 "base_bdevs_list": [ 00:42:28.082 { 00:42:28.082 "name": "BaseBdev1", 00:42:28.082 "uuid": "4424027a-0800-4803-bc1a-b53314d647b9", 00:42:28.082 "is_configured": true, 00:42:28.082 "data_offset": 256, 00:42:28.082 "data_size": 7936 00:42:28.082 }, 00:42:28.082 { 00:42:28.082 "name": "BaseBdev2", 00:42:28.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:28.082 "is_configured": false, 00:42:28.082 "data_offset": 0, 00:42:28.082 "data_size": 0 00:42:28.082 } 00:42:28.082 ] 00:42:28.082 }' 00:42:28.082 23:24:08 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:28.082 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:28.341 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:42:28.341 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.341 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:28.341 [2024-12-09 23:24:08.855301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:42:28.341 BaseBdev2 00:42:28.341 [2024-12-09 23:24:08.855759] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:42:28.341 [2024-12-09 23:24:08.855784] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:42:28.341 [2024-12-09 23:24:08.855871] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:42:28.341 [2024-12-09 23:24:08.856012] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:42:28.341 [2024-12-09 23:24:08.856026] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:42:28.341 [2024-12-09 23:24:08.856116] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:28.341 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.341 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:42:28.341 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:42:28.341 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:42:28.341 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:42:28.341 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:42:28.341 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:42:28.341 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:42:28.341 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.341 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:28.341 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.341 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:42:28.341 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.341 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:28.341 [ 00:42:28.341 { 00:42:28.341 "name": "BaseBdev2", 00:42:28.341 "aliases": [ 00:42:28.341 "9ad54900-3b13-4986-a30e-534d0f27c870" 00:42:28.341 ], 00:42:28.341 "product_name": "Malloc disk", 00:42:28.341 "block_size": 4096, 00:42:28.341 "num_blocks": 8192, 00:42:28.341 "uuid": "9ad54900-3b13-4986-a30e-534d0f27c870", 00:42:28.341 "md_size": 32, 00:42:28.341 "md_interleave": false, 00:42:28.341 "dif_type": 0, 00:42:28.341 "assigned_rate_limits": { 00:42:28.341 "rw_ios_per_sec": 0, 00:42:28.341 "rw_mbytes_per_sec": 0, 00:42:28.341 "r_mbytes_per_sec": 0, 00:42:28.341 "w_mbytes_per_sec": 0 00:42:28.341 }, 00:42:28.341 "claimed": true, 00:42:28.341 "claim_type": 
"exclusive_write", 00:42:28.341 "zoned": false, 00:42:28.341 "supported_io_types": { 00:42:28.341 "read": true, 00:42:28.341 "write": true, 00:42:28.341 "unmap": true, 00:42:28.341 "flush": true, 00:42:28.341 "reset": true, 00:42:28.341 "nvme_admin": false, 00:42:28.341 "nvme_io": false, 00:42:28.341 "nvme_io_md": false, 00:42:28.341 "write_zeroes": true, 00:42:28.341 "zcopy": true, 00:42:28.341 "get_zone_info": false, 00:42:28.341 "zone_management": false, 00:42:28.341 "zone_append": false, 00:42:28.341 "compare": false, 00:42:28.341 "compare_and_write": false, 00:42:28.341 "abort": true, 00:42:28.341 "seek_hole": false, 00:42:28.341 "seek_data": false, 00:42:28.341 "copy": true, 00:42:28.341 "nvme_iov_md": false 00:42:28.341 }, 00:42:28.341 "memory_domains": [ 00:42:28.341 { 00:42:28.341 "dma_device_id": "system", 00:42:28.341 "dma_device_type": 1 00:42:28.341 }, 00:42:28.341 { 00:42:28.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:42:28.341 "dma_device_type": 2 00:42:28.341 } 00:42:28.341 ], 00:42:28.341 "driver_specific": {} 00:42:28.341 } 00:42:28.341 ] 00:42:28.341 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.341 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:42:28.341 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:42:28.341 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:42:28.341 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:42:28.341 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:42:28.341 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:28.341 
23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:28.341 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:28.341 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:42:28.341 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:28.341 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:28.341 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:28.341 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:28.341 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:28.341 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:42:28.342 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.342 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:28.342 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.342 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:28.342 "name": "Existed_Raid", 00:42:28.342 "uuid": "b4549c1d-ff7c-4eee-bc84-1f6b3e18ed8b", 00:42:28.342 "strip_size_kb": 0, 00:42:28.342 "state": "online", 00:42:28.342 "raid_level": "raid1", 00:42:28.342 "superblock": true, 00:42:28.342 "num_base_bdevs": 2, 00:42:28.342 "num_base_bdevs_discovered": 2, 00:42:28.342 "num_base_bdevs_operational": 2, 00:42:28.342 
"base_bdevs_list": [ 00:42:28.342 { 00:42:28.342 "name": "BaseBdev1", 00:42:28.342 "uuid": "4424027a-0800-4803-bc1a-b53314d647b9", 00:42:28.342 "is_configured": true, 00:42:28.342 "data_offset": 256, 00:42:28.342 "data_size": 7936 00:42:28.342 }, 00:42:28.342 { 00:42:28.342 "name": "BaseBdev2", 00:42:28.342 "uuid": "9ad54900-3b13-4986-a30e-534d0f27c870", 00:42:28.342 "is_configured": true, 00:42:28.342 "data_offset": 256, 00:42:28.342 "data_size": 7936 00:42:28.342 } 00:42:28.342 ] 00:42:28.342 }' 00:42:28.342 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:28.342 23:24:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:28.910 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:42:28.910 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:42:28.910 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:42:28.910 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:42:28.910 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:42:28.910 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:42:28.910 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:42:28.910 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:42:28.910 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.910 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:42:28.910 [2024-12-09 23:24:09.338989] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:42:28.910 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.910 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:42:28.910 "name": "Existed_Raid", 00:42:28.910 "aliases": [ 00:42:28.910 "b4549c1d-ff7c-4eee-bc84-1f6b3e18ed8b" 00:42:28.910 ], 00:42:28.910 "product_name": "Raid Volume", 00:42:28.911 "block_size": 4096, 00:42:28.911 "num_blocks": 7936, 00:42:28.911 "uuid": "b4549c1d-ff7c-4eee-bc84-1f6b3e18ed8b", 00:42:28.911 "md_size": 32, 00:42:28.911 "md_interleave": false, 00:42:28.911 "dif_type": 0, 00:42:28.911 "assigned_rate_limits": { 00:42:28.911 "rw_ios_per_sec": 0, 00:42:28.911 "rw_mbytes_per_sec": 0, 00:42:28.911 "r_mbytes_per_sec": 0, 00:42:28.911 "w_mbytes_per_sec": 0 00:42:28.911 }, 00:42:28.911 "claimed": false, 00:42:28.911 "zoned": false, 00:42:28.911 "supported_io_types": { 00:42:28.911 "read": true, 00:42:28.911 "write": true, 00:42:28.911 "unmap": false, 00:42:28.911 "flush": false, 00:42:28.911 "reset": true, 00:42:28.911 "nvme_admin": false, 00:42:28.911 "nvme_io": false, 00:42:28.911 "nvme_io_md": false, 00:42:28.911 "write_zeroes": true, 00:42:28.911 "zcopy": false, 00:42:28.911 "get_zone_info": false, 00:42:28.911 "zone_management": false, 00:42:28.911 "zone_append": false, 00:42:28.911 "compare": false, 00:42:28.911 "compare_and_write": false, 00:42:28.911 "abort": false, 00:42:28.911 "seek_hole": false, 00:42:28.911 "seek_data": false, 00:42:28.911 "copy": false, 00:42:28.911 "nvme_iov_md": false 00:42:28.911 }, 00:42:28.911 "memory_domains": [ 00:42:28.911 { 00:42:28.911 "dma_device_id": "system", 00:42:28.911 "dma_device_type": 1 00:42:28.911 }, 00:42:28.911 { 00:42:28.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:42:28.911 "dma_device_type": 2 00:42:28.911 }, 00:42:28.911 { 
00:42:28.911 "dma_device_id": "system", 00:42:28.911 "dma_device_type": 1 00:42:28.911 }, 00:42:28.911 { 00:42:28.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:42:28.911 "dma_device_type": 2 00:42:28.911 } 00:42:28.911 ], 00:42:28.911 "driver_specific": { 00:42:28.911 "raid": { 00:42:28.911 "uuid": "b4549c1d-ff7c-4eee-bc84-1f6b3e18ed8b", 00:42:28.911 "strip_size_kb": 0, 00:42:28.911 "state": "online", 00:42:28.911 "raid_level": "raid1", 00:42:28.911 "superblock": true, 00:42:28.911 "num_base_bdevs": 2, 00:42:28.911 "num_base_bdevs_discovered": 2, 00:42:28.911 "num_base_bdevs_operational": 2, 00:42:28.911 "base_bdevs_list": [ 00:42:28.911 { 00:42:28.911 "name": "BaseBdev1", 00:42:28.911 "uuid": "4424027a-0800-4803-bc1a-b53314d647b9", 00:42:28.911 "is_configured": true, 00:42:28.911 "data_offset": 256, 00:42:28.911 "data_size": 7936 00:42:28.911 }, 00:42:28.911 { 00:42:28.911 "name": "BaseBdev2", 00:42:28.911 "uuid": "9ad54900-3b13-4986-a30e-534d0f27c870", 00:42:28.911 "is_configured": true, 00:42:28.911 "data_offset": 256, 00:42:28.911 "data_size": 7936 00:42:28.911 } 00:42:28.911 ] 00:42:28.911 } 00:42:28.911 } 00:42:28.911 }' 00:42:28.911 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:42:28.911 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:42:28.911 BaseBdev2' 00:42:28.911 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:42:28.911 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:42:28.911 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:42:28.911 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:42:28.911 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:42:28.911 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.911 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:28.911 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.911 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:42:28.911 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:42:28.911 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:42:28.911 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:42:28.911 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:42:28.911 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.911 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:29.169 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.169 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:42:29.169 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:42:29.169 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:42:29.169 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:29.169 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:29.169 [2024-12-09 23:24:09.570564] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:42:29.169 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.169 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:42:29.169 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:42:29.169 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:42:29.169 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:42:29.169 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:42:29.169 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:42:29.169 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:42:29.169 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:29.169 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:29.169 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:29.170 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:42:29.170 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:29.170 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:29.170 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:29.170 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:29.170 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:29.170 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:42:29.170 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:29.170 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:29.170 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.170 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:29.170 "name": "Existed_Raid", 00:42:29.170 "uuid": "b4549c1d-ff7c-4eee-bc84-1f6b3e18ed8b", 00:42:29.170 "strip_size_kb": 0, 00:42:29.170 "state": "online", 00:42:29.170 "raid_level": "raid1", 00:42:29.170 "superblock": true, 00:42:29.170 "num_base_bdevs": 2, 00:42:29.170 "num_base_bdevs_discovered": 1, 00:42:29.170 "num_base_bdevs_operational": 1, 00:42:29.170 "base_bdevs_list": [ 00:42:29.170 { 00:42:29.170 "name": null, 00:42:29.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:29.170 "is_configured": false, 00:42:29.170 "data_offset": 0, 00:42:29.170 "data_size": 7936 00:42:29.170 }, 00:42:29.170 { 00:42:29.170 "name": "BaseBdev2", 00:42:29.170 "uuid": 
"9ad54900-3b13-4986-a30e-534d0f27c870", 00:42:29.170 "is_configured": true, 00:42:29.170 "data_offset": 256, 00:42:29.170 "data_size": 7936 00:42:29.170 } 00:42:29.170 ] 00:42:29.170 }' 00:42:29.170 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:29.170 23:24:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:29.740 23:24:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:42:29.740 23:24:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:42:29.740 23:24:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:42:29.740 23:24:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:29.740 23:24:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:29.740 23:24:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:29.740 23:24:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.740 23:24:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:42:29.740 23:24:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:42:29.740 23:24:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:42:29.740 23:24:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:29.740 23:24:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:29.740 [2024-12-09 23:24:10.141568] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:42:29.740 [2024-12-09 23:24:10.141812] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:42:29.740 [2024-12-09 23:24:10.247011] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:29.740 [2024-12-09 23:24:10.247262] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:29.740 [2024-12-09 23:24:10.247471] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:42:29.740 23:24:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.740 23:24:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:42:29.740 23:24:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:42:29.740 23:24:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:29.740 23:24:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:29.740 23:24:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:42:29.740 23:24:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:29.740 23:24:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.740 23:24:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:42:29.740 23:24:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:42:29.740 23:24:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:42:29.740 23:24:10 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87075 00:42:29.740 23:24:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87075 ']' 00:42:29.740 23:24:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87075 00:42:29.740 23:24:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:42:29.740 23:24:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:29.740 23:24:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87075 00:42:29.740 killing process with pid 87075 00:42:29.740 23:24:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:29.740 23:24:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:29.740 23:24:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87075' 00:42:29.740 23:24:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87075 00:42:29.740 [2024-12-09 23:24:10.333612] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:42:29.740 23:24:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87075 00:42:29.740 [2024-12-09 23:24:10.349935] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:42:31.122 23:24:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:42:31.122 00:42:31.122 real 0m4.908s 00:42:31.122 user 0m6.961s 00:42:31.122 sys 0m0.899s 00:42:31.122 23:24:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:31.122 
23:24:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:31.122 ************************************ 00:42:31.122 END TEST raid_state_function_test_sb_md_separate 00:42:31.122 ************************************ 00:42:31.122 23:24:11 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:42:31.122 23:24:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:31.122 23:24:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:31.122 23:24:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:42:31.122 ************************************ 00:42:31.122 START TEST raid_superblock_test_md_separate 00:42:31.122 ************************************ 00:42:31.122 23:24:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:42:31.122 23:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:42:31.122 23:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:42:31.122 23:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:42:31.122 23:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:42:31.122 23:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:42:31.122 23:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:42:31.122 23:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:42:31.122 23:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:42:31.122 23:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:42:31.122 23:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:42:31.122 23:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:42:31.122 23:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:42:31.122 23:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:42:31.122 23:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:42:31.122 23:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:42:31.122 23:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87316 00:42:31.122 23:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:42:31.122 23:24:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87316 00:42:31.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:31.122 23:24:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87316 ']' 00:42:31.122 23:24:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:31.122 23:24:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:31.122 23:24:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:42:31.122 23:24:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:31.122 23:24:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:31.122 [2024-12-09 23:24:11.667191] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:42:31.122 [2024-12-09 23:24:11.667317] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87316 ] 00:42:31.381 [2024-12-09 23:24:11.839168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:31.381 [2024-12-09 23:24:11.958519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:31.641 [2024-12-09 23:24:12.165100] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:42:31.641 [2024-12-09 23:24:12.165164] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:42:31.899 23:24:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:31.899 23:24:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:42:31.899 23:24:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:42:31.899 23:24:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:42:31.899 23:24:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:42:31.899 23:24:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:42:31.899 23:24:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:42:31.899 23:24:12 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:42:31.899 23:24:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:42:31.899 23:24:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:42:31.900 23:24:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:42:31.900 23:24:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:31.900 23:24:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:32.159 malloc1 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:32.159 [2024-12-09 23:24:12.573317] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:42:32.159 [2024-12-09 23:24:12.573541] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:32.159 [2024-12-09 23:24:12.573607] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:42:32.159 [2024-12-09 23:24:12.573706] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:32.159 [2024-12-09 23:24:12.576109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:32.159 [2024-12-09 23:24:12.576253] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:42:32.159 pt1 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:32.159 malloc2 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.159 23:24:12 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:32.159 [2024-12-09 23:24:12.627128] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:42:32.159 [2024-12-09 23:24:12.627325] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:32.159 [2024-12-09 23:24:12.627389] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:42:32.159 [2024-12-09 23:24:12.627485] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:32.159 [2024-12-09 23:24:12.629838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:32.159 [2024-12-09 23:24:12.629972] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:42:32.159 pt2 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:32.159 [2024-12-09 23:24:12.639156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:42:32.159 [2024-12-09 23:24:12.641341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:42:32.159 [2024-12-09 23:24:12.641656] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:42:32.159 [2024-12-09 23:24:12.641749] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:42:32.159 [2024-12-09 23:24:12.641875] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:42:32.159 [2024-12-09 23:24:12.642101] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:42:32.159 [2024-12-09 23:24:12.642195] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:42:32.159 [2024-12-09 23:24:12.642429] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:32.159 23:24:12 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:32.159 "name": "raid_bdev1", 00:42:32.159 "uuid": "8fe05b93-48e2-447d-a419-72e54da360d0", 00:42:32.159 "strip_size_kb": 0, 00:42:32.159 "state": "online", 00:42:32.159 "raid_level": "raid1", 00:42:32.159 "superblock": true, 00:42:32.159 "num_base_bdevs": 2, 00:42:32.159 "num_base_bdevs_discovered": 2, 00:42:32.159 "num_base_bdevs_operational": 2, 00:42:32.159 "base_bdevs_list": [ 00:42:32.159 { 00:42:32.159 "name": "pt1", 00:42:32.159 "uuid": "00000000-0000-0000-0000-000000000001", 00:42:32.159 "is_configured": true, 00:42:32.159 "data_offset": 256, 00:42:32.159 "data_size": 7936 00:42:32.159 }, 00:42:32.159 { 00:42:32.159 "name": "pt2", 00:42:32.159 "uuid": "00000000-0000-0000-0000-000000000002", 00:42:32.159 "is_configured": true, 00:42:32.159 "data_offset": 256, 00:42:32.159 "data_size": 7936 00:42:32.159 } 00:42:32.159 ] 00:42:32.159 }' 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:32.159 23:24:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:32.728 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:42:32.728 23:24:13 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:42:32.728 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:42:32.728 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:42:32.728 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:42:32.728 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:42:32.728 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:42:32.728 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.728 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:32.728 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:42:32.728 [2024-12-09 23:24:13.094844] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:42:32.728 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.728 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:42:32.728 "name": "raid_bdev1", 00:42:32.728 "aliases": [ 00:42:32.728 "8fe05b93-48e2-447d-a419-72e54da360d0" 00:42:32.728 ], 00:42:32.728 "product_name": "Raid Volume", 00:42:32.728 "block_size": 4096, 00:42:32.728 "num_blocks": 7936, 00:42:32.728 "uuid": "8fe05b93-48e2-447d-a419-72e54da360d0", 00:42:32.728 "md_size": 32, 00:42:32.728 "md_interleave": false, 00:42:32.728 "dif_type": 0, 00:42:32.728 "assigned_rate_limits": { 00:42:32.728 "rw_ios_per_sec": 0, 00:42:32.728 "rw_mbytes_per_sec": 0, 00:42:32.728 "r_mbytes_per_sec": 0, 00:42:32.728 "w_mbytes_per_sec": 0 00:42:32.728 }, 00:42:32.728 "claimed": false, 00:42:32.728 "zoned": false, 
00:42:32.728 "supported_io_types": { 00:42:32.728 "read": true, 00:42:32.728 "write": true, 00:42:32.728 "unmap": false, 00:42:32.728 "flush": false, 00:42:32.728 "reset": true, 00:42:32.728 "nvme_admin": false, 00:42:32.728 "nvme_io": false, 00:42:32.728 "nvme_io_md": false, 00:42:32.728 "write_zeroes": true, 00:42:32.728 "zcopy": false, 00:42:32.728 "get_zone_info": false, 00:42:32.728 "zone_management": false, 00:42:32.728 "zone_append": false, 00:42:32.728 "compare": false, 00:42:32.728 "compare_and_write": false, 00:42:32.728 "abort": false, 00:42:32.728 "seek_hole": false, 00:42:32.728 "seek_data": false, 00:42:32.728 "copy": false, 00:42:32.728 "nvme_iov_md": false 00:42:32.728 }, 00:42:32.728 "memory_domains": [ 00:42:32.728 { 00:42:32.728 "dma_device_id": "system", 00:42:32.728 "dma_device_type": 1 00:42:32.728 }, 00:42:32.728 { 00:42:32.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:42:32.728 "dma_device_type": 2 00:42:32.728 }, 00:42:32.728 { 00:42:32.728 "dma_device_id": "system", 00:42:32.728 "dma_device_type": 1 00:42:32.728 }, 00:42:32.728 { 00:42:32.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:42:32.728 "dma_device_type": 2 00:42:32.728 } 00:42:32.728 ], 00:42:32.728 "driver_specific": { 00:42:32.728 "raid": { 00:42:32.728 "uuid": "8fe05b93-48e2-447d-a419-72e54da360d0", 00:42:32.728 "strip_size_kb": 0, 00:42:32.728 "state": "online", 00:42:32.728 "raid_level": "raid1", 00:42:32.728 "superblock": true, 00:42:32.728 "num_base_bdevs": 2, 00:42:32.728 "num_base_bdevs_discovered": 2, 00:42:32.728 "num_base_bdevs_operational": 2, 00:42:32.728 "base_bdevs_list": [ 00:42:32.728 { 00:42:32.728 "name": "pt1", 00:42:32.728 "uuid": "00000000-0000-0000-0000-000000000001", 00:42:32.728 "is_configured": true, 00:42:32.728 "data_offset": 256, 00:42:32.728 "data_size": 7936 00:42:32.728 }, 00:42:32.728 { 00:42:32.728 "name": "pt2", 00:42:32.728 "uuid": "00000000-0000-0000-0000-000000000002", 00:42:32.728 "is_configured": true, 00:42:32.728 "data_offset": 256, 
00:42:32.728 "data_size": 7936 00:42:32.728 } 00:42:32.728 ] 00:42:32.728 } 00:42:32.728 } 00:42:32.728 }' 00:42:32.728 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:42:32.728 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:42:32.728 pt2' 00:42:32.728 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:42:32.728 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:42:32.728 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:42:32.728 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:42:32.728 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:42:32.728 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.728 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:32.728 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.728 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:42:32.728 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:42:32.728 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:42:32.728 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:42:32.728 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.728 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:32.728 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:42:32.728 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.728 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:42:32.728 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:42:32.728 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:42:32.728 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.728 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:32.728 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:42:32.728 [2024-12-09 23:24:13.330559] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:42:32.728 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8fe05b93-48e2-447d-a419-72e54da360d0 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 8fe05b93-48e2-447d-a419-72e54da360d0 ']' 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:32.989 [2024-12-09 23:24:13.374210] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:42:32.989 [2024-12-09 23:24:13.374324] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:42:32.989 [2024-12-09 23:24:13.374483] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:32.989 [2024-12-09 23:24:13.374575] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:32.989 [2024-12-09 23:24:13.374602] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:42:32.989 23:24:13 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:32.989 [2024-12-09 23:24:13.514025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:42:32.989 [2024-12-09 23:24:13.516160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:42:32.989 [2024-12-09 23:24:13.516227] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:42:32.989 [2024-12-09 23:24:13.516286] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:42:32.989 [2024-12-09 23:24:13.516304] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:42:32.989 [2024-12-09 23:24:13.516316] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:42:32.989 request: 00:42:32.989 { 00:42:32.989 "name": 
"raid_bdev1", 00:42:32.989 "raid_level": "raid1", 00:42:32.989 "base_bdevs": [ 00:42:32.989 "malloc1", 00:42:32.989 "malloc2" 00:42:32.989 ], 00:42:32.989 "superblock": false, 00:42:32.989 "method": "bdev_raid_create", 00:42:32.989 "req_id": 1 00:42:32.989 } 00:42:32.989 Got JSON-RPC error response 00:42:32.989 response: 00:42:32.989 { 00:42:32.989 "code": -17, 00:42:32.989 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:42:32.989 } 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:42:32.989 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:42:32.990 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.990 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:32.990 [2024-12-09 23:24:13.585906] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:42:32.990 [2024-12-09 23:24:13.586067] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:32.990 [2024-12-09 23:24:13.586178] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:42:32.990 [2024-12-09 23:24:13.586264] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:32.990 [2024-12-09 23:24:13.588480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:32.990 [2024-12-09 23:24:13.588611] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:42:32.990 [2024-12-09 23:24:13.588772] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:42:32.990 [2024-12-09 23:24:13.588841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:42:32.990 pt1 00:42:32.990 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.990 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:42:32.990 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:32.990 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:42:32.990 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:32.990 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:42:32.990 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:42:32.990 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:32.990 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:32.990 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:32.990 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:32.990 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:32.990 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.990 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:32.990 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:32.990 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.249 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:33.249 "name": "raid_bdev1", 00:42:33.249 "uuid": "8fe05b93-48e2-447d-a419-72e54da360d0", 00:42:33.249 "strip_size_kb": 0, 00:42:33.249 "state": "configuring", 00:42:33.249 "raid_level": "raid1", 00:42:33.249 "superblock": true, 00:42:33.249 "num_base_bdevs": 2, 00:42:33.249 "num_base_bdevs_discovered": 1, 00:42:33.249 "num_base_bdevs_operational": 2, 00:42:33.249 "base_bdevs_list": [ 00:42:33.249 { 00:42:33.249 "name": "pt1", 00:42:33.249 "uuid": "00000000-0000-0000-0000-000000000001", 00:42:33.249 "is_configured": true, 00:42:33.249 "data_offset": 256, 00:42:33.249 "data_size": 7936 00:42:33.249 }, 00:42:33.249 { 00:42:33.249 "name": null, 00:42:33.249 
"uuid": "00000000-0000-0000-0000-000000000002", 00:42:33.249 "is_configured": false, 00:42:33.249 "data_offset": 256, 00:42:33.249 "data_size": 7936 00:42:33.249 } 00:42:33.249 ] 00:42:33.249 }' 00:42:33.249 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:33.249 23:24:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:33.512 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:42:33.512 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:42:33.512 23:24:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:42:33.512 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:42:33.512 23:24:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.512 23:24:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:33.512 [2024-12-09 23:24:14.009529] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:42:33.512 [2024-12-09 23:24:14.009615] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:33.512 [2024-12-09 23:24:14.009639] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:42:33.512 [2024-12-09 23:24:14.009655] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:33.512 [2024-12-09 23:24:14.009875] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:33.512 [2024-12-09 23:24:14.009895] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:42:33.512 [2024-12-09 23:24:14.009951] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:42:33.512 [2024-12-09 23:24:14.009976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:42:33.512 [2024-12-09 23:24:14.010080] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:42:33.513 [2024-12-09 23:24:14.010092] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:42:33.513 [2024-12-09 23:24:14.010163] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:42:33.513 [2024-12-09 23:24:14.010276] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:42:33.513 [2024-12-09 23:24:14.010286] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:42:33.513 [2024-12-09 23:24:14.010387] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:33.513 pt2 00:42:33.513 23:24:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.513 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:42:33.513 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:42:33.513 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:42:33.513 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:33.513 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:33.513 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:33.513 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:33.513 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:42:33.513 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:33.513 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:33.513 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:33.513 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:33.513 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:33.513 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:33.513 23:24:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.513 23:24:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:33.513 23:24:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.513 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:33.513 "name": "raid_bdev1", 00:42:33.513 "uuid": "8fe05b93-48e2-447d-a419-72e54da360d0", 00:42:33.513 "strip_size_kb": 0, 00:42:33.513 "state": "online", 00:42:33.513 "raid_level": "raid1", 00:42:33.513 "superblock": true, 00:42:33.513 "num_base_bdevs": 2, 00:42:33.513 "num_base_bdevs_discovered": 2, 00:42:33.513 "num_base_bdevs_operational": 2, 00:42:33.513 "base_bdevs_list": [ 00:42:33.513 { 00:42:33.513 "name": "pt1", 00:42:33.513 "uuid": "00000000-0000-0000-0000-000000000001", 00:42:33.513 "is_configured": true, 00:42:33.513 "data_offset": 256, 00:42:33.513 "data_size": 7936 00:42:33.513 }, 00:42:33.513 { 00:42:33.513 "name": "pt2", 00:42:33.513 "uuid": "00000000-0000-0000-0000-000000000002", 00:42:33.513 "is_configured": true, 00:42:33.513 "data_offset": 256, 
00:42:33.513 "data_size": 7936 00:42:33.513 } 00:42:33.513 ] 00:42:33.513 }' 00:42:33.513 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:33.513 23:24:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:34.083 [2024-12-09 23:24:14.445301] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:42:34.083 "name": "raid_bdev1", 00:42:34.083 "aliases": [ 00:42:34.083 "8fe05b93-48e2-447d-a419-72e54da360d0" 00:42:34.083 ], 00:42:34.083 "product_name": 
"Raid Volume", 00:42:34.083 "block_size": 4096, 00:42:34.083 "num_blocks": 7936, 00:42:34.083 "uuid": "8fe05b93-48e2-447d-a419-72e54da360d0", 00:42:34.083 "md_size": 32, 00:42:34.083 "md_interleave": false, 00:42:34.083 "dif_type": 0, 00:42:34.083 "assigned_rate_limits": { 00:42:34.083 "rw_ios_per_sec": 0, 00:42:34.083 "rw_mbytes_per_sec": 0, 00:42:34.083 "r_mbytes_per_sec": 0, 00:42:34.083 "w_mbytes_per_sec": 0 00:42:34.083 }, 00:42:34.083 "claimed": false, 00:42:34.083 "zoned": false, 00:42:34.083 "supported_io_types": { 00:42:34.083 "read": true, 00:42:34.083 "write": true, 00:42:34.083 "unmap": false, 00:42:34.083 "flush": false, 00:42:34.083 "reset": true, 00:42:34.083 "nvme_admin": false, 00:42:34.083 "nvme_io": false, 00:42:34.083 "nvme_io_md": false, 00:42:34.083 "write_zeroes": true, 00:42:34.083 "zcopy": false, 00:42:34.083 "get_zone_info": false, 00:42:34.083 "zone_management": false, 00:42:34.083 "zone_append": false, 00:42:34.083 "compare": false, 00:42:34.083 "compare_and_write": false, 00:42:34.083 "abort": false, 00:42:34.083 "seek_hole": false, 00:42:34.083 "seek_data": false, 00:42:34.083 "copy": false, 00:42:34.083 "nvme_iov_md": false 00:42:34.083 }, 00:42:34.083 "memory_domains": [ 00:42:34.083 { 00:42:34.083 "dma_device_id": "system", 00:42:34.083 "dma_device_type": 1 00:42:34.083 }, 00:42:34.083 { 00:42:34.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:42:34.083 "dma_device_type": 2 00:42:34.083 }, 00:42:34.083 { 00:42:34.083 "dma_device_id": "system", 00:42:34.083 "dma_device_type": 1 00:42:34.083 }, 00:42:34.083 { 00:42:34.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:42:34.083 "dma_device_type": 2 00:42:34.083 } 00:42:34.083 ], 00:42:34.083 "driver_specific": { 00:42:34.083 "raid": { 00:42:34.083 "uuid": "8fe05b93-48e2-447d-a419-72e54da360d0", 00:42:34.083 "strip_size_kb": 0, 00:42:34.083 "state": "online", 00:42:34.083 "raid_level": "raid1", 00:42:34.083 "superblock": true, 00:42:34.083 "num_base_bdevs": 2, 00:42:34.083 
"num_base_bdevs_discovered": 2, 00:42:34.083 "num_base_bdevs_operational": 2, 00:42:34.083 "base_bdevs_list": [ 00:42:34.083 { 00:42:34.083 "name": "pt1", 00:42:34.083 "uuid": "00000000-0000-0000-0000-000000000001", 00:42:34.083 "is_configured": true, 00:42:34.083 "data_offset": 256, 00:42:34.083 "data_size": 7936 00:42:34.083 }, 00:42:34.083 { 00:42:34.083 "name": "pt2", 00:42:34.083 "uuid": "00000000-0000-0000-0000-000000000002", 00:42:34.083 "is_configured": true, 00:42:34.083 "data_offset": 256, 00:42:34.083 "data_size": 7936 00:42:34.083 } 00:42:34.083 ] 00:42:34.083 } 00:42:34.083 } 00:42:34.083 }' 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:42:34.083 pt2' 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.083 
23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:34.083 [2024-12-09 23:24:14.657002] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 8fe05b93-48e2-447d-a419-72e54da360d0 '!=' 8fe05b93-48e2-447d-a419-72e54da360d0 ']' 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:34.083 [2024-12-09 23:24:14.701055] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:34.083 23:24:14 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:34.083 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:34.084 23:24:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.084 23:24:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:34.342 23:24:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.342 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:34.342 "name": "raid_bdev1", 00:42:34.342 "uuid": "8fe05b93-48e2-447d-a419-72e54da360d0", 00:42:34.342 "strip_size_kb": 0, 00:42:34.342 "state": "online", 00:42:34.342 "raid_level": "raid1", 00:42:34.342 "superblock": true, 00:42:34.342 "num_base_bdevs": 2, 00:42:34.342 "num_base_bdevs_discovered": 1, 00:42:34.342 "num_base_bdevs_operational": 1, 00:42:34.342 "base_bdevs_list": [ 00:42:34.342 { 00:42:34.342 "name": null, 00:42:34.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:34.342 "is_configured": false, 00:42:34.342 "data_offset": 0, 00:42:34.342 "data_size": 7936 00:42:34.342 }, 00:42:34.342 { 00:42:34.342 "name": "pt2", 00:42:34.342 "uuid": "00000000-0000-0000-0000-000000000002", 00:42:34.342 "is_configured": true, 00:42:34.342 "data_offset": 256, 00:42:34.342 "data_size": 7936 00:42:34.342 } 00:42:34.342 ] 00:42:34.342 }' 00:42:34.342 23:24:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:42:34.342 23:24:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:34.602 [2024-12-09 23:24:15.100531] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:42:34.602 [2024-12-09 23:24:15.100670] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:42:34.602 [2024-12-09 23:24:15.100769] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:34.602 [2024-12-09 23:24:15.100819] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:34.602 [2024-12-09 23:24:15.100833] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:42:34.602 23:24:15 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:34.602 [2024-12-09 23:24:15.164461] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:42:34.602 [2024-12-09 23:24:15.164631] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:34.602 
[2024-12-09 23:24:15.164685] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:42:34.602 [2024-12-09 23:24:15.164816] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:34.602 [2024-12-09 23:24:15.167099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:34.602 pt2 00:42:34.602 [2024-12-09 23:24:15.167253] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:42:34.602 [2024-12-09 23:24:15.167323] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:42:34.602 [2024-12-09 23:24:15.167386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:42:34.602 [2024-12-09 23:24:15.167504] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:42:34.602 [2024-12-09 23:24:15.167520] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:42:34.602 [2024-12-09 23:24:15.167610] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:42:34.602 [2024-12-09 23:24:15.167732] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:42:34.602 [2024-12-09 23:24:15.167741] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:42:34.602 [2024-12-09 23:24:15.167857] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:34.602 "name": "raid_bdev1", 00:42:34.602 "uuid": "8fe05b93-48e2-447d-a419-72e54da360d0", 00:42:34.602 "strip_size_kb": 0, 00:42:34.602 "state": "online", 00:42:34.602 "raid_level": "raid1", 00:42:34.602 "superblock": true, 00:42:34.602 "num_base_bdevs": 2, 00:42:34.602 "num_base_bdevs_discovered": 1, 00:42:34.602 "num_base_bdevs_operational": 1, 00:42:34.602 "base_bdevs_list": [ 00:42:34.602 { 00:42:34.602 
"name": null, 00:42:34.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:34.602 "is_configured": false, 00:42:34.602 "data_offset": 256, 00:42:34.602 "data_size": 7936 00:42:34.602 }, 00:42:34.602 { 00:42:34.602 "name": "pt2", 00:42:34.602 "uuid": "00000000-0000-0000-0000-000000000002", 00:42:34.602 "is_configured": true, 00:42:34.602 "data_offset": 256, 00:42:34.602 "data_size": 7936 00:42:34.602 } 00:42:34.602 ] 00:42:34.602 }' 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:34.602 23:24:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:35.171 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:42:35.171 23:24:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.171 23:24:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:35.171 [2024-12-09 23:24:15.579837] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:42:35.171 [2024-12-09 23:24:15.579979] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:42:35.171 [2024-12-09 23:24:15.580213] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:35.171 [2024-12-09 23:24:15.580277] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:35.172 [2024-12-09 23:24:15.580290] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:42:35.172 23:24:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:35.172 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:42:35.172 23:24:15 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:35.172 23:24:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.172 23:24:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:35.172 23:24:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:35.172 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:42:35.172 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:42:35.172 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:42:35.172 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:42:35.172 23:24:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.172 23:24:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:35.172 [2024-12-09 23:24:15.639808] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:42:35.172 [2024-12-09 23:24:15.639998] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:35.172 [2024-12-09 23:24:15.640064] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:42:35.172 [2024-12-09 23:24:15.640146] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:35.172 [2024-12-09 23:24:15.642460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:35.172 [2024-12-09 23:24:15.642607] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:42:35.172 [2024-12-09 23:24:15.642692] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:42:35.172 
[2024-12-09 23:24:15.642741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:42:35.172 [2024-12-09 23:24:15.642881] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:42:35.172 [2024-12-09 23:24:15.642893] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:42:35.172 [2024-12-09 23:24:15.642914] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:42:35.172 [2024-12-09 23:24:15.642994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:42:35.172 [2024-12-09 23:24:15.643064] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:42:35.172 [2024-12-09 23:24:15.643073] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:42:35.172 [2024-12-09 23:24:15.643144] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:42:35.172 [2024-12-09 23:24:15.643243] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:42:35.172 [2024-12-09 23:24:15.643254] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:42:35.172 [2024-12-09 23:24:15.643358] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:35.172 pt1 00:42:35.172 23:24:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:35.172 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:42:35.172 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:42:35.172 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:35.172 23:24:15 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:35.172 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:35.172 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:35.172 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:42:35.172 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:35.172 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:35.172 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:35.172 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:35.172 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:35.172 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:35.172 23:24:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.172 23:24:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:35.172 23:24:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:35.172 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:35.172 "name": "raid_bdev1", 00:42:35.172 "uuid": "8fe05b93-48e2-447d-a419-72e54da360d0", 00:42:35.172 "strip_size_kb": 0, 00:42:35.172 "state": "online", 00:42:35.172 "raid_level": "raid1", 00:42:35.172 "superblock": true, 00:42:35.172 "num_base_bdevs": 2, 00:42:35.172 "num_base_bdevs_discovered": 1, 00:42:35.172 
"num_base_bdevs_operational": 1, 00:42:35.172 "base_bdevs_list": [ 00:42:35.172 { 00:42:35.172 "name": null, 00:42:35.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:35.172 "is_configured": false, 00:42:35.172 "data_offset": 256, 00:42:35.172 "data_size": 7936 00:42:35.172 }, 00:42:35.172 { 00:42:35.172 "name": "pt2", 00:42:35.172 "uuid": "00000000-0000-0000-0000-000000000002", 00:42:35.172 "is_configured": true, 00:42:35.172 "data_offset": 256, 00:42:35.172 "data_size": 7936 00:42:35.172 } 00:42:35.172 ] 00:42:35.172 }' 00:42:35.172 23:24:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:35.172 23:24:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:35.740 23:24:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:42:35.740 23:24:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.740 23:24:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:35.740 23:24:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:42:35.740 23:24:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:35.740 23:24:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:42:35.740 23:24:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:42:35.740 23:24:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:42:35.740 23:24:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.740 23:24:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:35.740 [2024-12-09 
23:24:16.143285] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:42:35.740 23:24:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:35.740 23:24:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 8fe05b93-48e2-447d-a419-72e54da360d0 '!=' 8fe05b93-48e2-447d-a419-72e54da360d0 ']' 00:42:35.740 23:24:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87316 00:42:35.740 23:24:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87316 ']' 00:42:35.740 23:24:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87316 00:42:35.740 23:24:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:42:35.740 23:24:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:35.740 23:24:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87316 00:42:35.740 killing process with pid 87316 00:42:35.740 23:24:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:35.740 23:24:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:35.740 23:24:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87316' 00:42:35.740 23:24:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87316 00:42:35.740 [2024-12-09 23:24:16.208118] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:42:35.740 [2024-12-09 23:24:16.208216] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:35.740 23:24:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87316 
00:42:35.740 [2024-12-09 23:24:16.208265] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:35.740 [2024-12-09 23:24:16.208286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:42:35.999 [2024-12-09 23:24:16.430808] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:42:36.934 23:24:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:42:36.934 00:42:36.934 real 0m5.993s 00:42:36.934 user 0m9.052s 00:42:36.934 sys 0m1.161s 00:42:36.934 23:24:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:36.934 23:24:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:36.934 ************************************ 00:42:36.934 END TEST raid_superblock_test_md_separate 00:42:36.934 ************************************ 00:42:37.193 23:24:17 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:42:37.193 23:24:17 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:42:37.193 23:24:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:42:37.193 23:24:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:37.193 23:24:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:42:37.193 ************************************ 00:42:37.193 START TEST raid_rebuild_test_sb_md_separate 00:42:37.193 ************************************ 00:42:37.193 23:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:42:37.193 23:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:42:37.193 23:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:42:37.193 23:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:42:37.193 23:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:42:37.193 23:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:42:37.193 23:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:42:37.193 23:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:42:37.193 23:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:42:37.193 23:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:42:37.193 23:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:42:37.193 23:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:42:37.193 23:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:42:37.193 23:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:42:37.193 23:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:42:37.193 23:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:42:37.193 23:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:42:37.193 23:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:42:37.193 23:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:42:37.193 23:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:42:37.193 
23:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:42:37.193 23:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:42:37.193 23:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:42:37.193 23:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:42:37.193 23:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:42:37.194 23:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87640 00:42:37.194 23:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87640 00:42:37.194 23:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:42:37.194 23:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87640 ']' 00:42:37.194 23:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:37.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:37.194 23:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:37.194 23:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:42:37.194 23:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:37.194 23:24:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:37.194 [2024-12-09 23:24:17.745447] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:42:37.194 I/O size of 3145728 is greater than zero copy threshold (65536). 00:42:37.194 Zero copy mechanism will not be used. 00:42:37.194 [2024-12-09 23:24:17.745569] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87640 ] 00:42:37.452 [2024-12-09 23:24:17.918531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:37.452 [2024-12-09 23:24:18.041791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:37.711 [2024-12-09 23:24:18.263124] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:42:37.711 [2024-12-09 23:24:18.263176] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:42:37.970 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:37.970 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:42:37.970 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:42:37.970 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:42:37.970 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:37.970 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:38.230 BaseBdev1_malloc 
00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:38.230 [2024-12-09 23:24:18.621825] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:42:38.230 [2024-12-09 23:24:18.621889] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:38.230 [2024-12-09 23:24:18.621914] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:42:38.230 [2024-12-09 23:24:18.621928] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:38.230 [2024-12-09 23:24:18.624164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:38.230 [2024-12-09 23:24:18.624208] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:42:38.230 BaseBdev1 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:38.230 BaseBdev2_malloc 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:38.230 [2024-12-09 23:24:18.672111] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:42:38.230 [2024-12-09 23:24:18.672176] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:38.230 [2024-12-09 23:24:18.672198] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:42:38.230 [2024-12-09 23:24:18.672214] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:38.230 [2024-12-09 23:24:18.674306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:38.230 [2024-12-09 23:24:18.674350] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:42:38.230 BaseBdev2 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:38.230 spare_malloc 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:38.230 spare_delay 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:38.230 [2024-12-09 23:24:18.754720] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:42:38.230 [2024-12-09 23:24:18.754784] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:38.230 [2024-12-09 23:24:18.754808] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:42:38.230 [2024-12-09 23:24:18.754822] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:38.230 [2024-12-09 23:24:18.756968] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:38.230 [2024-12-09 23:24:18.757013] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:42:38.230 spare 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:42:38.230 [2024-12-09 23:24:18.766746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:42:38.230 [2024-12-09 23:24:18.768776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:42:38.230 [2024-12-09 23:24:18.768957] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:42:38.230 [2024-12-09 23:24:18.768973] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:42:38.230 [2024-12-09 23:24:18.769050] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:42:38.230 [2024-12-09 23:24:18.769183] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:42:38.230 [2024-12-09 23:24:18.769194] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:42:38.230 [2024-12-09 23:24:18.769309] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:42:38.230 23:24:18 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:38.230 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:38.230 "name": "raid_bdev1", 00:42:38.231 "uuid": "0a045206-9ae5-4dbb-80b4-66862aa6e2c2", 00:42:38.231 "strip_size_kb": 0, 00:42:38.231 "state": "online", 00:42:38.231 "raid_level": "raid1", 00:42:38.231 "superblock": true, 00:42:38.231 "num_base_bdevs": 2, 00:42:38.231 "num_base_bdevs_discovered": 2, 00:42:38.231 "num_base_bdevs_operational": 2, 00:42:38.231 "base_bdevs_list": [ 00:42:38.231 { 00:42:38.231 "name": "BaseBdev1", 00:42:38.231 "uuid": "7f799116-cb64-5d5b-b057-43107d4cd34c", 00:42:38.231 "is_configured": true, 00:42:38.231 "data_offset": 256, 00:42:38.231 "data_size": 7936 00:42:38.231 }, 00:42:38.231 { 00:42:38.231 "name": "BaseBdev2", 00:42:38.231 "uuid": "38d63295-4962-5d41-9576-c656b885e591", 00:42:38.231 "is_configured": true, 00:42:38.231 "data_offset": 256, 00:42:38.231 "data_size": 7936 
00:42:38.231 } 00:42:38.231 ] 00:42:38.231 }' 00:42:38.231 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:38.231 23:24:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:38.797 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:42:38.797 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:42:38.797 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:38.797 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:38.797 [2024-12-09 23:24:19.202851] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:42:38.797 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:38.797 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:42:38.797 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:38.797 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:42:38.797 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:38.797 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:38.797 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:38.797 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:42:38.797 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:42:38.797 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:42:38.797 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:42:38.797 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:42:38.797 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:42:38.797 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:42:38.797 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:42:38.797 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:42:38.797 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:42:38.797 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:42:38.797 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:42:38.797 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:38.797 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:42:39.056 [2024-12-09 23:24:19.474625] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:42:39.056 /dev/nbd0 00:42:39.056 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:42:39.056 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:42:39.056 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:42:39.056 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # local i 00:42:39.056 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:42:39.056 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:42:39.056 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:42:39.056 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:42:39.056 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:42:39.056 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:42:39.056 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:39.056 1+0 records in 00:42:39.056 1+0 records out 00:42:39.056 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341847 s, 12.0 MB/s 00:42:39.056 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:39.056 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:42:39.056 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:39.056 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:42:39.056 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:42:39.056 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:39.056 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:39.056 23:24:19 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:42:39.056 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:42:39.056 23:24:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:42:39.991 7936+0 records in 00:42:39.991 7936+0 records out 00:42:39.991 32505856 bytes (33 MB, 31 MiB) copied, 0.715197 s, 45.5 MB/s 00:42:39.991 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:42:39.991 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:42:39.991 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:42:39.991 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:39.991 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:42:39.991 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:39.991 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:42:39.991 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:42:39.991 [2024-12-09 23:24:20.483419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:39.991 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:42:39.991 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:42:39.991 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:39.991 23:24:20 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:39.991 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:39.991 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:42:39.991 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:42:39.991 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:42:39.991 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:39.991 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:39.991 [2024-12-09 23:24:20.499520] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:42:39.991 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:39.991 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:42:39.991 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:39.991 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:39.991 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:39.991 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:39.991 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:42:39.991 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:39.991 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:42:39.991 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:39.991 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:39.991 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:39.991 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:39.991 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:39.991 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:39.991 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:39.991 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:39.991 "name": "raid_bdev1", 00:42:39.991 "uuid": "0a045206-9ae5-4dbb-80b4-66862aa6e2c2", 00:42:39.991 "strip_size_kb": 0, 00:42:39.991 "state": "online", 00:42:39.991 "raid_level": "raid1", 00:42:39.991 "superblock": true, 00:42:39.991 "num_base_bdevs": 2, 00:42:39.991 "num_base_bdevs_discovered": 1, 00:42:39.991 "num_base_bdevs_operational": 1, 00:42:39.991 "base_bdevs_list": [ 00:42:39.991 { 00:42:39.991 "name": null, 00:42:39.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:39.991 "is_configured": false, 00:42:39.991 "data_offset": 0, 00:42:39.991 "data_size": 7936 00:42:39.991 }, 00:42:39.991 { 00:42:39.991 "name": "BaseBdev2", 00:42:39.991 "uuid": "38d63295-4962-5d41-9576-c656b885e591", 00:42:39.991 "is_configured": true, 00:42:39.991 "data_offset": 256, 00:42:39.991 "data_size": 7936 00:42:39.991 } 00:42:39.991 ] 00:42:39.991 }' 00:42:39.991 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:39.991 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:42:40.581 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:42:40.581 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:40.581 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:40.581 [2024-12-09 23:24:20.942915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:40.581 [2024-12-09 23:24:20.958637] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:42:40.581 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:40.581 23:24:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:42:40.581 [2024-12-09 23:24:20.960858] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:42:41.517 23:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:41.517 23:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:41.517 23:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:41.517 23:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:41.517 23:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:41.517 23:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:41.517 23:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:41.517 23:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:42:41.517 23:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:41.517 23:24:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:41.517 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:41.517 "name": "raid_bdev1", 00:42:41.517 "uuid": "0a045206-9ae5-4dbb-80b4-66862aa6e2c2", 00:42:41.517 "strip_size_kb": 0, 00:42:41.517 "state": "online", 00:42:41.517 "raid_level": "raid1", 00:42:41.517 "superblock": true, 00:42:41.517 "num_base_bdevs": 2, 00:42:41.517 "num_base_bdevs_discovered": 2, 00:42:41.517 "num_base_bdevs_operational": 2, 00:42:41.517 "process": { 00:42:41.517 "type": "rebuild", 00:42:41.517 "target": "spare", 00:42:41.517 "progress": { 00:42:41.517 "blocks": 2560, 00:42:41.517 "percent": 32 00:42:41.517 } 00:42:41.517 }, 00:42:41.517 "base_bdevs_list": [ 00:42:41.517 { 00:42:41.517 "name": "spare", 00:42:41.517 "uuid": "48b3cb01-1a07-5a7f-bf73-4223107d6b67", 00:42:41.517 "is_configured": true, 00:42:41.517 "data_offset": 256, 00:42:41.517 "data_size": 7936 00:42:41.517 }, 00:42:41.517 { 00:42:41.517 "name": "BaseBdev2", 00:42:41.517 "uuid": "38d63295-4962-5d41-9576-c656b885e591", 00:42:41.517 "is_configured": true, 00:42:41.517 "data_offset": 256, 00:42:41.517 "data_size": 7936 00:42:41.517 } 00:42:41.517 ] 00:42:41.517 }' 00:42:41.517 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:41.517 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:41.517 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:41.517 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:41.517 23:24:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:42:41.517 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:41.517 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:41.517 [2024-12-09 23:24:22.104792] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:41.776 [2024-12-09 23:24:22.166415] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:42:41.776 [2024-12-09 23:24:22.166506] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:41.776 [2024-12-09 23:24:22.166524] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:41.776 [2024-12-09 23:24:22.166539] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:42:41.776 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:41.776 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:42:41.776 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:41.776 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:41.776 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:41.776 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:41.776 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:42:41.776 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:41.776 23:24:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:41.776 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:41.776 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:41.776 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:41.776 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:41.776 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:41.776 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:41.776 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:41.776 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:41.776 "name": "raid_bdev1", 00:42:41.776 "uuid": "0a045206-9ae5-4dbb-80b4-66862aa6e2c2", 00:42:41.776 "strip_size_kb": 0, 00:42:41.776 "state": "online", 00:42:41.776 "raid_level": "raid1", 00:42:41.776 "superblock": true, 00:42:41.776 "num_base_bdevs": 2, 00:42:41.776 "num_base_bdevs_discovered": 1, 00:42:41.776 "num_base_bdevs_operational": 1, 00:42:41.776 "base_bdevs_list": [ 00:42:41.776 { 00:42:41.776 "name": null, 00:42:41.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:41.776 "is_configured": false, 00:42:41.776 "data_offset": 0, 00:42:41.776 "data_size": 7936 00:42:41.776 }, 00:42:41.776 { 00:42:41.776 "name": "BaseBdev2", 00:42:41.776 "uuid": "38d63295-4962-5d41-9576-c656b885e591", 00:42:41.776 "is_configured": true, 00:42:41.776 "data_offset": 256, 00:42:41.776 "data_size": 7936 00:42:41.776 } 00:42:41.776 ] 00:42:41.776 }' 00:42:41.776 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:41.776 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:42.035 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:42.035 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:42.035 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:42:42.035 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:42:42.035 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:42.035 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:42.035 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:42.035 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:42.035 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:42.035 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:42.035 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:42.035 "name": "raid_bdev1", 00:42:42.035 "uuid": "0a045206-9ae5-4dbb-80b4-66862aa6e2c2", 00:42:42.035 "strip_size_kb": 0, 00:42:42.035 "state": "online", 00:42:42.035 "raid_level": "raid1", 00:42:42.035 "superblock": true, 00:42:42.035 "num_base_bdevs": 2, 00:42:42.035 "num_base_bdevs_discovered": 1, 00:42:42.035 "num_base_bdevs_operational": 1, 00:42:42.035 "base_bdevs_list": [ 00:42:42.035 { 00:42:42.035 "name": null, 00:42:42.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:42.035 
"is_configured": false, 00:42:42.035 "data_offset": 0, 00:42:42.035 "data_size": 7936 00:42:42.035 }, 00:42:42.035 { 00:42:42.035 "name": "BaseBdev2", 00:42:42.035 "uuid": "38d63295-4962-5d41-9576-c656b885e591", 00:42:42.035 "is_configured": true, 00:42:42.035 "data_offset": 256, 00:42:42.035 "data_size": 7936 00:42:42.035 } 00:42:42.035 ] 00:42:42.035 }' 00:42:42.294 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:42.294 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:42:42.294 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:42.294 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:42:42.294 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:42:42.294 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:42.294 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:42.294 [2024-12-09 23:24:22.766586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:42.294 [2024-12-09 23:24:22.781011] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:42:42.294 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:42.294 23:24:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:42:42.294 [2024-12-09 23:24:22.783145] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:42:43.229 23:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:43.229 23:24:23 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:43.229 23:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:43.229 23:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:43.229 23:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:43.229 23:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:43.230 23:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:43.230 23:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:43.230 23:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:43.230 23:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:43.230 23:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:43.230 "name": "raid_bdev1", 00:42:43.230 "uuid": "0a045206-9ae5-4dbb-80b4-66862aa6e2c2", 00:42:43.230 "strip_size_kb": 0, 00:42:43.230 "state": "online", 00:42:43.230 "raid_level": "raid1", 00:42:43.230 "superblock": true, 00:42:43.230 "num_base_bdevs": 2, 00:42:43.230 "num_base_bdevs_discovered": 2, 00:42:43.230 "num_base_bdevs_operational": 2, 00:42:43.230 "process": { 00:42:43.230 "type": "rebuild", 00:42:43.230 "target": "spare", 00:42:43.230 "progress": { 00:42:43.230 "blocks": 2560, 00:42:43.230 "percent": 32 00:42:43.230 } 00:42:43.230 }, 00:42:43.230 "base_bdevs_list": [ 00:42:43.230 { 00:42:43.230 "name": "spare", 00:42:43.230 "uuid": "48b3cb01-1a07-5a7f-bf73-4223107d6b67", 00:42:43.230 "is_configured": true, 00:42:43.230 "data_offset": 256, 00:42:43.230 "data_size": 7936 00:42:43.230 }, 
00:42:43.230 { 00:42:43.230 "name": "BaseBdev2", 00:42:43.230 "uuid": "38d63295-4962-5d41-9576-c656b885e591", 00:42:43.230 "is_configured": true, 00:42:43.230 "data_offset": 256, 00:42:43.230 "data_size": 7936 00:42:43.230 } 00:42:43.230 ] 00:42:43.230 }' 00:42:43.230 23:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:43.489 23:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:43.489 23:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:43.489 23:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:43.489 23:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:42:43.489 23:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:42:43.489 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:42:43.489 23:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:42:43.489 23:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:42:43.489 23:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:42:43.489 23:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=713 00:42:43.489 23:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:42:43.489 23:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:43.489 23:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:43.489 23:24:23 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:43.489 23:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:43.489 23:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:43.489 23:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:43.489 23:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:43.489 23:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:43.489 23:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:43.489 23:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:43.489 23:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:43.489 "name": "raid_bdev1", 00:42:43.489 "uuid": "0a045206-9ae5-4dbb-80b4-66862aa6e2c2", 00:42:43.489 "strip_size_kb": 0, 00:42:43.489 "state": "online", 00:42:43.489 "raid_level": "raid1", 00:42:43.489 "superblock": true, 00:42:43.489 "num_base_bdevs": 2, 00:42:43.489 "num_base_bdevs_discovered": 2, 00:42:43.489 "num_base_bdevs_operational": 2, 00:42:43.489 "process": { 00:42:43.489 "type": "rebuild", 00:42:43.489 "target": "spare", 00:42:43.489 "progress": { 00:42:43.489 "blocks": 2816, 00:42:43.489 "percent": 35 00:42:43.489 } 00:42:43.489 }, 00:42:43.489 "base_bdevs_list": [ 00:42:43.489 { 00:42:43.489 "name": "spare", 00:42:43.489 "uuid": "48b3cb01-1a07-5a7f-bf73-4223107d6b67", 00:42:43.489 "is_configured": true, 00:42:43.489 "data_offset": 256, 00:42:43.489 "data_size": 7936 00:42:43.489 }, 00:42:43.489 { 00:42:43.489 "name": "BaseBdev2", 00:42:43.489 "uuid": "38d63295-4962-5d41-9576-c656b885e591", 00:42:43.489 
"is_configured": true, 00:42:43.489 "data_offset": 256, 00:42:43.489 "data_size": 7936 00:42:43.489 } 00:42:43.489 ] 00:42:43.489 }' 00:42:43.489 23:24:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:43.489 23:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:43.489 23:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:43.489 23:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:43.489 23:24:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:42:44.863 23:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:42:44.863 23:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:44.863 23:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:44.863 23:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:44.863 23:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:44.863 23:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:44.863 23:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:44.863 23:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.863 23:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:44.863 23:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:44.863 23:24:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.863 23:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:44.863 "name": "raid_bdev1", 00:42:44.863 "uuid": "0a045206-9ae5-4dbb-80b4-66862aa6e2c2", 00:42:44.863 "strip_size_kb": 0, 00:42:44.863 "state": "online", 00:42:44.863 "raid_level": "raid1", 00:42:44.863 "superblock": true, 00:42:44.863 "num_base_bdevs": 2, 00:42:44.863 "num_base_bdevs_discovered": 2, 00:42:44.863 "num_base_bdevs_operational": 2, 00:42:44.863 "process": { 00:42:44.863 "type": "rebuild", 00:42:44.863 "target": "spare", 00:42:44.863 "progress": { 00:42:44.863 "blocks": 5632, 00:42:44.863 "percent": 70 00:42:44.863 } 00:42:44.863 }, 00:42:44.863 "base_bdevs_list": [ 00:42:44.863 { 00:42:44.863 "name": "spare", 00:42:44.863 "uuid": "48b3cb01-1a07-5a7f-bf73-4223107d6b67", 00:42:44.863 "is_configured": true, 00:42:44.863 "data_offset": 256, 00:42:44.863 "data_size": 7936 00:42:44.863 }, 00:42:44.864 { 00:42:44.864 "name": "BaseBdev2", 00:42:44.864 "uuid": "38d63295-4962-5d41-9576-c656b885e591", 00:42:44.864 "is_configured": true, 00:42:44.864 "data_offset": 256, 00:42:44.864 "data_size": 7936 00:42:44.864 } 00:42:44.864 ] 00:42:44.864 }' 00:42:44.864 23:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:44.864 23:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:44.864 23:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:44.864 23:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:44.864 23:24:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:42:45.429 [2024-12-09 23:24:25.902224] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:42:45.429 [2024-12-09 23:24:25.902350] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:42:45.429 [2024-12-09 23:24:25.902557] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:45.687 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:42:45.687 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:45.687 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:45.687 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:45.687 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:45.687 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:45.687 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:45.687 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.687 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:45.687 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:45.687 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.687 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:45.687 "name": "raid_bdev1", 00:42:45.687 "uuid": "0a045206-9ae5-4dbb-80b4-66862aa6e2c2", 00:42:45.687 "strip_size_kb": 0, 00:42:45.687 "state": "online", 00:42:45.687 "raid_level": "raid1", 00:42:45.687 "superblock": true, 00:42:45.687 
"num_base_bdevs": 2, 00:42:45.687 "num_base_bdevs_discovered": 2, 00:42:45.687 "num_base_bdevs_operational": 2, 00:42:45.687 "base_bdevs_list": [ 00:42:45.687 { 00:42:45.687 "name": "spare", 00:42:45.687 "uuid": "48b3cb01-1a07-5a7f-bf73-4223107d6b67", 00:42:45.687 "is_configured": true, 00:42:45.687 "data_offset": 256, 00:42:45.687 "data_size": 7936 00:42:45.687 }, 00:42:45.687 { 00:42:45.687 "name": "BaseBdev2", 00:42:45.687 "uuid": "38d63295-4962-5d41-9576-c656b885e591", 00:42:45.687 "is_configured": true, 00:42:45.687 "data_offset": 256, 00:42:45.687 "data_size": 7936 00:42:45.687 } 00:42:45.687 ] 00:42:45.687 }' 00:42:45.687 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:45.687 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:42:45.687 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:45.946 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:42:45.946 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:42:45.946 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:45.946 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:45.946 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:42:45.946 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:42:45.946 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:45.946 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:45.946 23:24:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.946 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:45.946 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:45.946 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.946 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:45.946 "name": "raid_bdev1", 00:42:45.946 "uuid": "0a045206-9ae5-4dbb-80b4-66862aa6e2c2", 00:42:45.946 "strip_size_kb": 0, 00:42:45.946 "state": "online", 00:42:45.946 "raid_level": "raid1", 00:42:45.946 "superblock": true, 00:42:45.946 "num_base_bdevs": 2, 00:42:45.946 "num_base_bdevs_discovered": 2, 00:42:45.946 "num_base_bdevs_operational": 2, 00:42:45.946 "base_bdevs_list": [ 00:42:45.946 { 00:42:45.946 "name": "spare", 00:42:45.946 "uuid": "48b3cb01-1a07-5a7f-bf73-4223107d6b67", 00:42:45.946 "is_configured": true, 00:42:45.946 "data_offset": 256, 00:42:45.946 "data_size": 7936 00:42:45.946 }, 00:42:45.946 { 00:42:45.946 "name": "BaseBdev2", 00:42:45.946 "uuid": "38d63295-4962-5d41-9576-c656b885e591", 00:42:45.946 "is_configured": true, 00:42:45.946 "data_offset": 256, 00:42:45.946 "data_size": 7936 00:42:45.946 } 00:42:45.946 ] 00:42:45.946 }' 00:42:45.946 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:45.946 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:42:45.946 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:45.946 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:42:45.946 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:42:45.946 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:45.946 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:45.946 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:45.946 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:45.946 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:42:45.946 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:45.947 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:45.947 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:45.947 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:45.947 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:45.947 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.947 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:45.947 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:45.947 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.947 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:45.947 "name": "raid_bdev1", 00:42:45.947 "uuid": "0a045206-9ae5-4dbb-80b4-66862aa6e2c2", 00:42:45.947 
"strip_size_kb": 0, 00:42:45.947 "state": "online", 00:42:45.947 "raid_level": "raid1", 00:42:45.947 "superblock": true, 00:42:45.947 "num_base_bdevs": 2, 00:42:45.947 "num_base_bdevs_discovered": 2, 00:42:45.947 "num_base_bdevs_operational": 2, 00:42:45.947 "base_bdevs_list": [ 00:42:45.947 { 00:42:45.947 "name": "spare", 00:42:45.947 "uuid": "48b3cb01-1a07-5a7f-bf73-4223107d6b67", 00:42:45.947 "is_configured": true, 00:42:45.947 "data_offset": 256, 00:42:45.947 "data_size": 7936 00:42:45.947 }, 00:42:45.947 { 00:42:45.947 "name": "BaseBdev2", 00:42:45.947 "uuid": "38d63295-4962-5d41-9576-c656b885e591", 00:42:45.947 "is_configured": true, 00:42:45.947 "data_offset": 256, 00:42:45.947 "data_size": 7936 00:42:45.947 } 00:42:45.947 ] 00:42:45.947 }' 00:42:45.947 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:45.947 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:46.515 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:42:46.515 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.515 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:46.515 [2024-12-09 23:24:26.867590] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:42:46.515 [2024-12-09 23:24:26.867664] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:42:46.515 [2024-12-09 23:24:26.867795] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:46.515 [2024-12-09 23:24:26.867889] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:46.515 [2024-12-09 23:24:26.867905] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:42:46.515 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.515 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:46.515 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.515 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:42:46.515 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:46.515 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.515 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:42:46.515 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:42:46.515 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:42:46.516 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:42:46.516 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:42:46.516 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:42:46.516 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:42:46.516 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:46.516 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:42:46.516 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:42:46.516 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:42:46.516 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:42:46.516 23:24:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:42:46.516 /dev/nbd0 00:42:46.516 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:42:46.774 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:42:46.774 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:42:46.774 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:42:46.774 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:42:46.774 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:42:46.774 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:42:46.774 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:42:46.774 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:42:46.774 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:42:46.774 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:46.774 1+0 records in 00:42:46.774 1+0 records out 00:42:46.774 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388759 s, 10.5 MB/s 00:42:46.774 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:46.774 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:42:46.774 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:46.774 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:42:46.774 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:42:46.774 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:46.774 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:42:46.774 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:42:46.774 /dev/nbd1 00:42:46.774 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:42:46.774 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:42:47.033 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:42:47.033 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:42:47.033 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:42:47.033 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:42:47.033 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:42:47.033 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:42:47.033 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:42:47.033 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:42:47.033 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:47.033 1+0 records in 00:42:47.033 1+0 records out 00:42:47.033 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000500945 s, 8.2 MB/s 00:42:47.033 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:47.033 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:42:47.033 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:47.033 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:42:47.033 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:42:47.033 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:47.033 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:42:47.033 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:42:47.033 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:42:47.033 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:42:47.033 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:47.033 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:42:47.033 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:42:47.033 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:47.033 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:42:47.291 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:42:47.291 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:42:47.291 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:42:47.291 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:47.291 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:47.291 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:47.291 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:42:47.291 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:42:47.291 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:47.291 23:24:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:42:47.549 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:42:47.549 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:42:47.549 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:42:47.549 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:47.549 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:47.549 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:42:47.549 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:42:47.549 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:42:47.549 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:42:47.549 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:42:47.549 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.549 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:47.549 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.549 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:42:47.549 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.549 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:47.549 [2024-12-09 23:24:28.103031] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:42:47.549 [2024-12-09 23:24:28.103110] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:47.549 [2024-12-09 23:24:28.103144] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:42:47.549 [2024-12-09 23:24:28.103158] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:42:47.549 [2024-12-09 23:24:28.105754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:47.549 [2024-12-09 23:24:28.105801] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:42:47.549 [2024-12-09 23:24:28.105882] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:42:47.549 [2024-12-09 23:24:28.105953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:47.549 [2024-12-09 23:24:28.106117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:42:47.549 spare 00:42:47.549 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.549 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:42:47.549 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.549 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:47.807 [2024-12-09 23:24:28.206049] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:42:47.807 [2024-12-09 23:24:28.206093] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:42:47.807 [2024-12-09 23:24:28.206211] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:42:47.807 [2024-12-09 23:24:28.206390] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:42:47.807 [2024-12-09 23:24:28.206401] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:42:47.807 [2024-12-09 23:24:28.206577] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:47.807 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:42:47.807 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:42:47.807 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:47.807 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:47.807 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:47.807 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:47.807 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:42:47.807 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:47.807 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:47.807 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:47.807 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:47.807 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:47.807 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.807 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:47.807 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:47.807 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.807 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:47.807 "name": "raid_bdev1", 00:42:47.807 "uuid": 
"0a045206-9ae5-4dbb-80b4-66862aa6e2c2", 00:42:47.807 "strip_size_kb": 0, 00:42:47.807 "state": "online", 00:42:47.807 "raid_level": "raid1", 00:42:47.807 "superblock": true, 00:42:47.807 "num_base_bdevs": 2, 00:42:47.807 "num_base_bdevs_discovered": 2, 00:42:47.807 "num_base_bdevs_operational": 2, 00:42:47.807 "base_bdevs_list": [ 00:42:47.807 { 00:42:47.807 "name": "spare", 00:42:47.807 "uuid": "48b3cb01-1a07-5a7f-bf73-4223107d6b67", 00:42:47.807 "is_configured": true, 00:42:47.807 "data_offset": 256, 00:42:47.807 "data_size": 7936 00:42:47.807 }, 00:42:47.807 { 00:42:47.807 "name": "BaseBdev2", 00:42:47.807 "uuid": "38d63295-4962-5d41-9576-c656b885e591", 00:42:47.807 "is_configured": true, 00:42:47.807 "data_offset": 256, 00:42:47.807 "data_size": 7936 00:42:47.807 } 00:42:47.807 ] 00:42:47.807 }' 00:42:47.807 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:47.807 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:48.065 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:48.065 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:48.065 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:42:48.065 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:42:48.065 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:48.065 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:48.065 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:48.065 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:42:48.065 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:48.065 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:48.065 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:48.065 "name": "raid_bdev1", 00:42:48.065 "uuid": "0a045206-9ae5-4dbb-80b4-66862aa6e2c2", 00:42:48.065 "strip_size_kb": 0, 00:42:48.065 "state": "online", 00:42:48.065 "raid_level": "raid1", 00:42:48.065 "superblock": true, 00:42:48.065 "num_base_bdevs": 2, 00:42:48.065 "num_base_bdevs_discovered": 2, 00:42:48.065 "num_base_bdevs_operational": 2, 00:42:48.065 "base_bdevs_list": [ 00:42:48.065 { 00:42:48.065 "name": "spare", 00:42:48.065 "uuid": "48b3cb01-1a07-5a7f-bf73-4223107d6b67", 00:42:48.065 "is_configured": true, 00:42:48.065 "data_offset": 256, 00:42:48.065 "data_size": 7936 00:42:48.065 }, 00:42:48.065 { 00:42:48.065 "name": "BaseBdev2", 00:42:48.066 "uuid": "38d63295-4962-5d41-9576-c656b885e591", 00:42:48.066 "is_configured": true, 00:42:48.066 "data_offset": 256, 00:42:48.066 "data_size": 7936 00:42:48.066 } 00:42:48.066 ] 00:42:48.066 }' 00:42:48.066 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:48.324 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:42:48.324 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:48.324 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:42:48.324 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:48.324 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:42:48.324 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:42:48.324 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:48.324 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:48.324 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:42:48.324 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:42:48.324 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:48.324 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:48.324 [2024-12-09 23:24:28.818647] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:48.324 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:48.324 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:42:48.324 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:48.324 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:48.324 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:48.324 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:48.324 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:42:48.324 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:48.324 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:48.324 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:48.324 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:48.324 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:48.324 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:48.324 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:48.324 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:48.324 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:48.324 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:48.324 "name": "raid_bdev1", 00:42:48.325 "uuid": "0a045206-9ae5-4dbb-80b4-66862aa6e2c2", 00:42:48.325 "strip_size_kb": 0, 00:42:48.325 "state": "online", 00:42:48.325 "raid_level": "raid1", 00:42:48.325 "superblock": true, 00:42:48.325 "num_base_bdevs": 2, 00:42:48.325 "num_base_bdevs_discovered": 1, 00:42:48.325 "num_base_bdevs_operational": 1, 00:42:48.325 "base_bdevs_list": [ 00:42:48.325 { 00:42:48.325 "name": null, 00:42:48.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:48.325 "is_configured": false, 00:42:48.325 "data_offset": 0, 00:42:48.325 "data_size": 7936 00:42:48.325 }, 00:42:48.325 { 00:42:48.325 "name": "BaseBdev2", 00:42:48.325 "uuid": "38d63295-4962-5d41-9576-c656b885e591", 00:42:48.325 "is_configured": true, 00:42:48.325 "data_offset": 256, 00:42:48.325 "data_size": 7936 00:42:48.325 } 00:42:48.325 ] 00:42:48.325 }' 00:42:48.325 23:24:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:48.325 23:24:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:48.583 23:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:42:48.583 23:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:48.583 23:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:48.841 [2024-12-09 23:24:29.222626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:48.841 [2024-12-09 23:24:29.222944] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:42:48.841 [2024-12-09 23:24:29.222970] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:42:48.841 [2024-12-09 23:24:29.223027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:48.841 [2024-12-09 23:24:29.237443] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:42:48.841 23:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:48.841 23:24:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:42:48.841 [2024-12-09 23:24:29.240041] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:42:49.782 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:49.782 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:49.782 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:49.782 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:42:49.782 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:49.782 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:49.782 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:49.782 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:49.782 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:49.782 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:49.782 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:49.782 "name": "raid_bdev1", 00:42:49.782 "uuid": "0a045206-9ae5-4dbb-80b4-66862aa6e2c2", 00:42:49.782 "strip_size_kb": 0, 00:42:49.782 "state": "online", 00:42:49.782 "raid_level": "raid1", 00:42:49.782 "superblock": true, 00:42:49.782 "num_base_bdevs": 2, 00:42:49.782 "num_base_bdevs_discovered": 2, 00:42:49.782 "num_base_bdevs_operational": 2, 00:42:49.782 "process": { 00:42:49.782 "type": "rebuild", 00:42:49.782 "target": "spare", 00:42:49.782 "progress": { 00:42:49.782 "blocks": 2560, 00:42:49.782 "percent": 32 00:42:49.782 } 00:42:49.782 }, 00:42:49.782 "base_bdevs_list": [ 00:42:49.782 { 00:42:49.782 "name": "spare", 00:42:49.782 "uuid": "48b3cb01-1a07-5a7f-bf73-4223107d6b67", 00:42:49.782 "is_configured": true, 00:42:49.782 "data_offset": 256, 00:42:49.782 "data_size": 7936 00:42:49.782 }, 00:42:49.782 { 00:42:49.782 "name": "BaseBdev2", 00:42:49.782 "uuid": "38d63295-4962-5d41-9576-c656b885e591", 00:42:49.782 "is_configured": true, 00:42:49.782 "data_offset": 256, 00:42:49.782 "data_size": 7936 00:42:49.782 } 00:42:49.782 ] 00:42:49.782 }' 00:42:49.782 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:49.782 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:49.782 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:49.782 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:49.782 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:42:49.782 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:49.782 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:49.782 [2024-12-09 23:24:30.396570] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:50.041 [2024-12-09 23:24:30.446538] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:42:50.041 [2024-12-09 23:24:30.446641] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:50.041 [2024-12-09 23:24:30.446659] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:50.042 [2024-12-09 23:24:30.446682] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:42:50.042 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:50.042 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:42:50.042 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:50.042 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:50.042 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:50.042 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:50.042 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:42:50.042 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:50.042 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:50.042 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:50.042 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:50.042 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:50.042 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:50.042 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:50.042 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:50.042 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:50.042 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:50.042 "name": "raid_bdev1", 00:42:50.042 "uuid": "0a045206-9ae5-4dbb-80b4-66862aa6e2c2", 00:42:50.042 "strip_size_kb": 0, 00:42:50.042 "state": "online", 00:42:50.042 "raid_level": "raid1", 00:42:50.042 "superblock": true, 00:42:50.042 "num_base_bdevs": 2, 00:42:50.042 "num_base_bdevs_discovered": 1, 00:42:50.042 "num_base_bdevs_operational": 1, 00:42:50.042 "base_bdevs_list": [ 00:42:50.042 { 00:42:50.042 "name": null, 00:42:50.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:50.042 
"is_configured": false, 00:42:50.042 "data_offset": 0, 00:42:50.042 "data_size": 7936 00:42:50.042 }, 00:42:50.042 { 00:42:50.042 "name": "BaseBdev2", 00:42:50.042 "uuid": "38d63295-4962-5d41-9576-c656b885e591", 00:42:50.042 "is_configured": true, 00:42:50.042 "data_offset": 256, 00:42:50.042 "data_size": 7936 00:42:50.042 } 00:42:50.042 ] 00:42:50.042 }' 00:42:50.042 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:50.042 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:50.301 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:42:50.301 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:50.301 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:50.301 [2024-12-09 23:24:30.858647] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:42:50.301 [2024-12-09 23:24:30.858721] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:50.301 [2024-12-09 23:24:30.858752] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:42:50.301 [2024-12-09 23:24:30.858766] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:50.301 [2024-12-09 23:24:30.859062] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:50.301 [2024-12-09 23:24:30.859083] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:42:50.301 [2024-12-09 23:24:30.859145] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:42:50.301 [2024-12-09 23:24:30.859161] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:42:50.301 [2024-12-09 23:24:30.859174] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:42:50.301 [2024-12-09 23:24:30.859201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:50.301 [2024-12-09 23:24:30.873146] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:42:50.301 spare 00:42:50.301 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:50.301 23:24:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:42:50.301 [2024-12-09 23:24:30.875285] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:42:51.679 23:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:51.679 23:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:51.679 23:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:51.679 23:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:51.679 23:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:51.679 23:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:51.679 23:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:51.679 23:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:51.679 23:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:51.679 23:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:42:51.679 23:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:51.679 "name": "raid_bdev1", 00:42:51.679 "uuid": "0a045206-9ae5-4dbb-80b4-66862aa6e2c2", 00:42:51.679 "strip_size_kb": 0, 00:42:51.679 "state": "online", 00:42:51.679 "raid_level": "raid1", 00:42:51.679 "superblock": true, 00:42:51.679 "num_base_bdevs": 2, 00:42:51.679 "num_base_bdevs_discovered": 2, 00:42:51.679 "num_base_bdevs_operational": 2, 00:42:51.679 "process": { 00:42:51.679 "type": "rebuild", 00:42:51.679 "target": "spare", 00:42:51.679 "progress": { 00:42:51.679 "blocks": 2560, 00:42:51.679 "percent": 32 00:42:51.679 } 00:42:51.679 }, 00:42:51.679 "base_bdevs_list": [ 00:42:51.679 { 00:42:51.679 "name": "spare", 00:42:51.679 "uuid": "48b3cb01-1a07-5a7f-bf73-4223107d6b67", 00:42:51.679 "is_configured": true, 00:42:51.679 "data_offset": 256, 00:42:51.679 "data_size": 7936 00:42:51.679 }, 00:42:51.679 { 00:42:51.679 "name": "BaseBdev2", 00:42:51.679 "uuid": "38d63295-4962-5d41-9576-c656b885e591", 00:42:51.679 "is_configured": true, 00:42:51.679 "data_offset": 256, 00:42:51.679 "data_size": 7936 00:42:51.679 } 00:42:51.679 ] 00:42:51.679 }' 00:42:51.679 23:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:51.679 23:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:51.679 23:24:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:51.679 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:51.679 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:42:51.679 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:51.679 23:24:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:51.679 [2024-12-09 23:24:32.023849] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:51.679 [2024-12-09 23:24:32.080828] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:42:51.679 [2024-12-09 23:24:32.080893] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:51.679 [2024-12-09 23:24:32.080913] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:51.679 [2024-12-09 23:24:32.080922] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:42:51.679 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:51.679 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:42:51.679 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:51.679 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:51.679 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:51.679 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:51.679 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:42:51.679 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:51.679 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:51.679 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:51.679 23:24:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:51.679 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:51.679 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:51.679 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:51.679 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:51.679 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:51.679 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:51.679 "name": "raid_bdev1", 00:42:51.679 "uuid": "0a045206-9ae5-4dbb-80b4-66862aa6e2c2", 00:42:51.679 "strip_size_kb": 0, 00:42:51.679 "state": "online", 00:42:51.679 "raid_level": "raid1", 00:42:51.679 "superblock": true, 00:42:51.679 "num_base_bdevs": 2, 00:42:51.679 "num_base_bdevs_discovered": 1, 00:42:51.679 "num_base_bdevs_operational": 1, 00:42:51.679 "base_bdevs_list": [ 00:42:51.679 { 00:42:51.679 "name": null, 00:42:51.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:51.679 "is_configured": false, 00:42:51.679 "data_offset": 0, 00:42:51.679 "data_size": 7936 00:42:51.679 }, 00:42:51.679 { 00:42:51.679 "name": "BaseBdev2", 00:42:51.679 "uuid": "38d63295-4962-5d41-9576-c656b885e591", 00:42:51.679 "is_configured": true, 00:42:51.679 "data_offset": 256, 00:42:51.679 "data_size": 7936 00:42:51.679 } 00:42:51.679 ] 00:42:51.679 }' 00:42:51.679 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:51.679 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:51.938 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:42:51.938 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:51.938 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:42:51.938 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:42:51.938 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:51.938 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:51.938 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:51.938 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:51.938 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:52.196 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:52.196 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:52.196 "name": "raid_bdev1", 00:42:52.196 "uuid": "0a045206-9ae5-4dbb-80b4-66862aa6e2c2", 00:42:52.196 "strip_size_kb": 0, 00:42:52.196 "state": "online", 00:42:52.196 "raid_level": "raid1", 00:42:52.196 "superblock": true, 00:42:52.196 "num_base_bdevs": 2, 00:42:52.196 "num_base_bdevs_discovered": 1, 00:42:52.196 "num_base_bdevs_operational": 1, 00:42:52.196 "base_bdevs_list": [ 00:42:52.196 { 00:42:52.196 "name": null, 00:42:52.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:52.196 "is_configured": false, 00:42:52.196 "data_offset": 0, 00:42:52.196 "data_size": 7936 00:42:52.196 }, 00:42:52.196 { 00:42:52.196 "name": "BaseBdev2", 00:42:52.196 "uuid": "38d63295-4962-5d41-9576-c656b885e591", 00:42:52.196 "is_configured": true, 
00:42:52.196 "data_offset": 256, 00:42:52.197 "data_size": 7936 00:42:52.197 } 00:42:52.197 ] 00:42:52.197 }' 00:42:52.197 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:52.197 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:42:52.197 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:52.197 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:42:52.197 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:42:52.197 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:52.197 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:52.197 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:52.197 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:42:52.197 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:52.197 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:52.197 [2024-12-09 23:24:32.688537] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:42:52.197 [2024-12-09 23:24:32.688602] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:52.197 [2024-12-09 23:24:32.688629] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:42:52.197 [2024-12-09 23:24:32.688640] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:52.197 [2024-12-09 23:24:32.688885] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:52.197 [2024-12-09 23:24:32.688901] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:42:52.197 [2024-12-09 23:24:32.688961] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:42:52.197 [2024-12-09 23:24:32.688978] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:42:52.197 [2024-12-09 23:24:32.688990] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:42:52.197 [2024-12-09 23:24:32.689001] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:42:52.197 BaseBdev1 00:42:52.197 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:52.197 23:24:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:42:53.133 23:24:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:42:53.133 23:24:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:53.133 23:24:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:53.134 23:24:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:53.134 23:24:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:53.134 23:24:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:42:53.134 23:24:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:53.134 23:24:33 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:53.134 23:24:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:53.134 23:24:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:53.134 23:24:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:53.134 23:24:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:53.134 23:24:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:53.134 23:24:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:53.134 23:24:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:53.134 23:24:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:53.134 "name": "raid_bdev1", 00:42:53.134 "uuid": "0a045206-9ae5-4dbb-80b4-66862aa6e2c2", 00:42:53.134 "strip_size_kb": 0, 00:42:53.134 "state": "online", 00:42:53.134 "raid_level": "raid1", 00:42:53.134 "superblock": true, 00:42:53.134 "num_base_bdevs": 2, 00:42:53.134 "num_base_bdevs_discovered": 1, 00:42:53.134 "num_base_bdevs_operational": 1, 00:42:53.134 "base_bdevs_list": [ 00:42:53.134 { 00:42:53.134 "name": null, 00:42:53.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:53.134 "is_configured": false, 00:42:53.134 "data_offset": 0, 00:42:53.134 "data_size": 7936 00:42:53.134 }, 00:42:53.134 { 00:42:53.134 "name": "BaseBdev2", 00:42:53.134 "uuid": "38d63295-4962-5d41-9576-c656b885e591", 00:42:53.134 "is_configured": true, 00:42:53.134 "data_offset": 256, 00:42:53.134 "data_size": 7936 00:42:53.134 } 00:42:53.134 ] 00:42:53.134 }' 00:42:53.134 23:24:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:53.134 23:24:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:53.700 23:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:53.700 23:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:53.700 23:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:42:53.700 23:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:42:53.700 23:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:53.700 23:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:53.700 23:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:53.700 23:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:53.700 23:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:53.700 23:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:53.700 23:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:53.700 "name": "raid_bdev1", 00:42:53.700 "uuid": "0a045206-9ae5-4dbb-80b4-66862aa6e2c2", 00:42:53.700 "strip_size_kb": 0, 00:42:53.700 "state": "online", 00:42:53.700 "raid_level": "raid1", 00:42:53.700 "superblock": true, 00:42:53.700 "num_base_bdevs": 2, 00:42:53.700 "num_base_bdevs_discovered": 1, 00:42:53.700 "num_base_bdevs_operational": 1, 00:42:53.700 "base_bdevs_list": [ 00:42:53.700 { 00:42:53.700 "name": null, 00:42:53.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:53.700 "is_configured": false, 00:42:53.700 "data_offset": 0, 00:42:53.700 
"data_size": 7936 00:42:53.700 }, 00:42:53.700 { 00:42:53.700 "name": "BaseBdev2", 00:42:53.700 "uuid": "38d63295-4962-5d41-9576-c656b885e591", 00:42:53.700 "is_configured": true, 00:42:53.700 "data_offset": 256, 00:42:53.700 "data_size": 7936 00:42:53.700 } 00:42:53.700 ] 00:42:53.700 }' 00:42:53.700 23:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:53.700 23:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:42:53.700 23:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:53.700 23:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:42:53.700 23:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:42:53.700 23:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:42:53.700 23:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:42:53.700 23:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:42:53.700 23:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:53.700 23:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:42:53.700 23:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:53.700 23:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:42:53.700 23:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:42:53.700 23:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:53.700 [2024-12-09 23:24:34.298783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:42:53.700 [2024-12-09 23:24:34.298979] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:42:53.700 [2024-12-09 23:24:34.299005] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:42:53.700 request: 00:42:53.700 { 00:42:53.700 "base_bdev": "BaseBdev1", 00:42:53.700 "raid_bdev": "raid_bdev1", 00:42:53.700 "method": "bdev_raid_add_base_bdev", 00:42:53.700 "req_id": 1 00:42:53.700 } 00:42:53.700 Got JSON-RPC error response 00:42:53.700 response: 00:42:53.700 { 00:42:53.700 "code": -22, 00:42:53.700 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:42:53.700 } 00:42:53.700 23:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:42:53.700 23:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:42:53.700 23:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:53.700 23:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:53.700 23:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:53.700 23:24:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:42:55.087 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:42:55.087 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:55.087 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:55.087 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:55.087 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:55.087 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:42:55.087 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:55.087 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:55.087 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:55.087 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:55.087 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:55.087 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:55.087 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.087 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:55.087 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.087 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:55.087 "name": "raid_bdev1", 00:42:55.087 "uuid": "0a045206-9ae5-4dbb-80b4-66862aa6e2c2", 00:42:55.087 "strip_size_kb": 0, 00:42:55.087 "state": "online", 00:42:55.087 "raid_level": "raid1", 00:42:55.087 "superblock": true, 00:42:55.087 "num_base_bdevs": 2, 00:42:55.087 "num_base_bdevs_discovered": 1, 00:42:55.087 "num_base_bdevs_operational": 1, 00:42:55.087 "base_bdevs_list": [ 
00:42:55.087 { 00:42:55.087 "name": null, 00:42:55.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:55.087 "is_configured": false, 00:42:55.087 "data_offset": 0, 00:42:55.087 "data_size": 7936 00:42:55.087 }, 00:42:55.087 { 00:42:55.087 "name": "BaseBdev2", 00:42:55.087 "uuid": "38d63295-4962-5d41-9576-c656b885e591", 00:42:55.087 "is_configured": true, 00:42:55.087 "data_offset": 256, 00:42:55.087 "data_size": 7936 00:42:55.087 } 00:42:55.087 ] 00:42:55.087 }' 00:42:55.087 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:55.087 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:55.345 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:55.345 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:55.345 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:42:55.345 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:42:55.345 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:55.345 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:55.345 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:55.345 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.345 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:55.345 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.345 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:55.345 "name": "raid_bdev1", 00:42:55.345 "uuid": "0a045206-9ae5-4dbb-80b4-66862aa6e2c2", 00:42:55.345 "strip_size_kb": 0, 00:42:55.345 "state": "online", 00:42:55.345 "raid_level": "raid1", 00:42:55.345 "superblock": true, 00:42:55.345 "num_base_bdevs": 2, 00:42:55.345 "num_base_bdevs_discovered": 1, 00:42:55.345 "num_base_bdevs_operational": 1, 00:42:55.345 "base_bdevs_list": [ 00:42:55.345 { 00:42:55.345 "name": null, 00:42:55.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:55.345 "is_configured": false, 00:42:55.345 "data_offset": 0, 00:42:55.345 "data_size": 7936 00:42:55.345 }, 00:42:55.345 { 00:42:55.345 "name": "BaseBdev2", 00:42:55.345 "uuid": "38d63295-4962-5d41-9576-c656b885e591", 00:42:55.345 "is_configured": true, 00:42:55.345 "data_offset": 256, 00:42:55.345 "data_size": 7936 00:42:55.345 } 00:42:55.345 ] 00:42:55.345 }' 00:42:55.345 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:55.345 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:42:55.345 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:55.346 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:42:55.346 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87640 00:42:55.346 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87640 ']' 00:42:55.346 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87640 00:42:55.346 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:42:55.346 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:55.346 
23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87640 00:42:55.346 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:55.346 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:55.346 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87640' 00:42:55.346 killing process with pid 87640 00:42:55.346 Received shutdown signal, test time was about 60.000000 seconds 00:42:55.346 00:42:55.346 Latency(us) 00:42:55.346 [2024-12-09T23:24:35.982Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:55.346 [2024-12-09T23:24:35.982Z] =================================================================================================================== 00:42:55.346 [2024-12-09T23:24:35.982Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:42:55.346 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87640 00:42:55.346 [2024-12-09 23:24:35.942450] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:42:55.346 23:24:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87640 00:42:55.346 [2024-12-09 23:24:35.942596] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:55.346 [2024-12-09 23:24:35.942649] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:55.346 [2024-12-09 23:24:35.942664] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:42:55.913 [2024-12-09 23:24:36.283404] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:42:56.908 23:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 00:42:56.908 00:42:56.908 real 0m19.815s 00:42:56.908 user 0m25.593s 00:42:56.908 sys 0m2.978s 00:42:56.908 23:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:56.908 ************************************ 00:42:56.908 END TEST raid_rebuild_test_sb_md_separate 00:42:56.908 ************************************ 00:42:56.908 23:24:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:42:56.908 23:24:37 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:42:56.908 23:24:37 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:42:56.908 23:24:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:42:56.908 23:24:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:56.908 23:24:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:42:56.908 ************************************ 00:42:56.908 START TEST raid_state_function_test_sb_md_interleaved 00:42:56.908 ************************************ 00:42:56.908 23:24:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:42:56.909 23:24:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:42:56.909 23:24:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:42:56.909 23:24:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:42:56.909 23:24:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:42:56.909 23:24:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:42:56.909 23:24:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:42:56.909 23:24:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:42:56.909 23:24:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:42:56.909 23:24:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:42:56.909 23:24:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:42:56.909 23:24:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:42:56.909 23:24:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:42:56.909 23:24:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:42:56.909 23:24:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:42:56.909 23:24:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:42:56.909 23:24:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:42:56.909 23:24:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:42:57.171 23:24:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:42:57.171 23:24:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:42:57.171 23:24:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:42:57.171 23:24:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:42:57.171 23:24:37 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:42:57.171 23:24:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88335 00:42:57.171 Process raid pid: 88335 00:42:57.171 23:24:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88335' 00:42:57.171 23:24:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:42:57.171 23:24:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88335 00:42:57.171 23:24:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88335 ']' 00:42:57.171 23:24:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:57.171 23:24:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:57.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:57.171 23:24:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:57.171 23:24:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:57.171 23:24:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:57.171 [2024-12-09 23:24:37.639198] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:42:57.171 [2024-12-09 23:24:37.639335] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:57.429 [2024-12-09 23:24:37.824819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:57.429 [2024-12-09 23:24:37.951034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:57.687 [2024-12-09 23:24:38.178358] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:42:57.687 [2024-12-09 23:24:38.178415] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:42:57.945 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:57.945 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:42:57.945 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:42:57.945 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.945 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:57.945 [2024-12-09 23:24:38.508427] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:42:57.945 [2024-12-09 23:24:38.508485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:42:57.945 [2024-12-09 23:24:38.508498] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:42:57.945 [2024-12-09 23:24:38.508512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:42:57.945 23:24:38 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.945 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:42:57.945 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:42:57.945 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:42:57.945 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:57.945 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:57.945 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:42:57.945 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:57.945 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:57.946 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:57.946 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:57.946 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:57.946 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:42:57.946 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.946 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:57.946 23:24:38 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.946 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:57.946 "name": "Existed_Raid", 00:42:57.946 "uuid": "5366b89d-1b52-4164-a262-7d243e27a657", 00:42:57.946 "strip_size_kb": 0, 00:42:57.946 "state": "configuring", 00:42:57.946 "raid_level": "raid1", 00:42:57.946 "superblock": true, 00:42:57.946 "num_base_bdevs": 2, 00:42:57.946 "num_base_bdevs_discovered": 0, 00:42:57.946 "num_base_bdevs_operational": 2, 00:42:57.946 "base_bdevs_list": [ 00:42:57.946 { 00:42:57.946 "name": "BaseBdev1", 00:42:57.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:57.946 "is_configured": false, 00:42:57.946 "data_offset": 0, 00:42:57.946 "data_size": 0 00:42:57.946 }, 00:42:57.946 { 00:42:57.946 "name": "BaseBdev2", 00:42:57.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:57.946 "is_configured": false, 00:42:57.946 "data_offset": 0, 00:42:57.946 "data_size": 0 00:42:57.946 } 00:42:57.946 ] 00:42:57.946 }' 00:42:57.946 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:57.946 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:58.511 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:42:58.511 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:58.511 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:58.511 [2024-12-09 23:24:38.915812] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:42:58.511 [2024-12-09 23:24:38.915856] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:42:58.511 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:58.511 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:42:58.511 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:58.511 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:58.511 [2024-12-09 23:24:38.927751] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:42:58.511 [2024-12-09 23:24:38.927796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:42:58.511 [2024-12-09 23:24:38.927806] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:42:58.511 [2024-12-09 23:24:38.927822] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:42:58.511 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:58.511 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:42:58.511 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:58.511 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:58.511 [2024-12-09 23:24:38.979205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:42:58.511 BaseBdev1 00:42:58.511 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:58.511 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:42:58.511 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:42:58.511 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:42:58.511 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:42:58.511 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:42:58.511 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:42:58.511 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:42:58.511 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:58.511 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:58.511 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:58.511 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:42:58.511 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:58.511 23:24:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:58.511 [ 00:42:58.511 { 00:42:58.511 "name": "BaseBdev1", 00:42:58.511 "aliases": [ 00:42:58.511 "372965d5-1533-48cf-8dfc-68f68d99de3c" 00:42:58.511 ], 00:42:58.511 "product_name": "Malloc disk", 00:42:58.511 "block_size": 4128, 00:42:58.511 "num_blocks": 8192, 00:42:58.511 "uuid": "372965d5-1533-48cf-8dfc-68f68d99de3c", 00:42:58.511 "md_size": 32, 00:42:58.511 
"md_interleave": true, 00:42:58.511 "dif_type": 0, 00:42:58.511 "assigned_rate_limits": { 00:42:58.511 "rw_ios_per_sec": 0, 00:42:58.511 "rw_mbytes_per_sec": 0, 00:42:58.511 "r_mbytes_per_sec": 0, 00:42:58.511 "w_mbytes_per_sec": 0 00:42:58.511 }, 00:42:58.511 "claimed": true, 00:42:58.511 "claim_type": "exclusive_write", 00:42:58.511 "zoned": false, 00:42:58.511 "supported_io_types": { 00:42:58.511 "read": true, 00:42:58.511 "write": true, 00:42:58.511 "unmap": true, 00:42:58.511 "flush": true, 00:42:58.511 "reset": true, 00:42:58.511 "nvme_admin": false, 00:42:58.511 "nvme_io": false, 00:42:58.511 "nvme_io_md": false, 00:42:58.511 "write_zeroes": true, 00:42:58.511 "zcopy": true, 00:42:58.511 "get_zone_info": false, 00:42:58.511 "zone_management": false, 00:42:58.511 "zone_append": false, 00:42:58.511 "compare": false, 00:42:58.511 "compare_and_write": false, 00:42:58.511 "abort": true, 00:42:58.511 "seek_hole": false, 00:42:58.511 "seek_data": false, 00:42:58.511 "copy": true, 00:42:58.511 "nvme_iov_md": false 00:42:58.511 }, 00:42:58.511 "memory_domains": [ 00:42:58.511 { 00:42:58.511 "dma_device_id": "system", 00:42:58.511 "dma_device_type": 1 00:42:58.512 }, 00:42:58.512 { 00:42:58.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:42:58.512 "dma_device_type": 2 00:42:58.512 } 00:42:58.512 ], 00:42:58.512 "driver_specific": {} 00:42:58.512 } 00:42:58.512 ] 00:42:58.512 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:58.512 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:42:58.512 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:42:58.512 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:42:58.512 23:24:39 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:42:58.512 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:58.512 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:58.512 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:42:58.512 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:58.512 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:58.512 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:58.512 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:58.512 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:58.512 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:58.512 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:42:58.512 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:58.512 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:58.512 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:58.512 "name": "Existed_Raid", 00:42:58.512 "uuid": "188416f7-1b6e-4204-a5f3-c646a5d532bc", 00:42:58.512 "strip_size_kb": 0, 00:42:58.512 "state": "configuring", 00:42:58.512 "raid_level": "raid1", 
00:42:58.512 "superblock": true, 00:42:58.512 "num_base_bdevs": 2, 00:42:58.512 "num_base_bdevs_discovered": 1, 00:42:58.512 "num_base_bdevs_operational": 2, 00:42:58.512 "base_bdevs_list": [ 00:42:58.512 { 00:42:58.512 "name": "BaseBdev1", 00:42:58.512 "uuid": "372965d5-1533-48cf-8dfc-68f68d99de3c", 00:42:58.512 "is_configured": true, 00:42:58.512 "data_offset": 256, 00:42:58.512 "data_size": 7936 00:42:58.512 }, 00:42:58.512 { 00:42:58.512 "name": "BaseBdev2", 00:42:58.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:58.512 "is_configured": false, 00:42:58.512 "data_offset": 0, 00:42:58.512 "data_size": 0 00:42:58.512 } 00:42:58.512 ] 00:42:58.512 }' 00:42:58.512 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:58.512 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:59.078 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:42:59.078 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:59.078 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:59.078 [2024-12-09 23:24:39.446628] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:42:59.078 [2024-12-09 23:24:39.446688] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:42:59.078 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:59.078 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:42:59.078 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:42:59.078 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:59.078 [2024-12-09 23:24:39.458680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:42:59.078 [2024-12-09 23:24:39.460749] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:42:59.078 [2024-12-09 23:24:39.460799] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:42:59.078 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:59.078 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:42:59.078 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:42:59.078 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:42:59.078 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:42:59.078 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:42:59.078 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:59.078 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:59.078 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:42:59.078 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:59.078 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:59.078 
23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:59.078 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:59.078 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:59.078 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:42:59.078 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:59.078 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:59.078 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:59.078 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:59.078 "name": "Existed_Raid", 00:42:59.078 "uuid": "ebde1213-7811-4a7c-a5ca-45417e522929", 00:42:59.078 "strip_size_kb": 0, 00:42:59.078 "state": "configuring", 00:42:59.078 "raid_level": "raid1", 00:42:59.078 "superblock": true, 00:42:59.078 "num_base_bdevs": 2, 00:42:59.078 "num_base_bdevs_discovered": 1, 00:42:59.078 "num_base_bdevs_operational": 2, 00:42:59.078 "base_bdevs_list": [ 00:42:59.078 { 00:42:59.078 "name": "BaseBdev1", 00:42:59.078 "uuid": "372965d5-1533-48cf-8dfc-68f68d99de3c", 00:42:59.078 "is_configured": true, 00:42:59.078 "data_offset": 256, 00:42:59.078 "data_size": 7936 00:42:59.078 }, 00:42:59.078 { 00:42:59.078 "name": "BaseBdev2", 00:42:59.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:59.078 "is_configured": false, 00:42:59.078 "data_offset": 0, 00:42:59.078 "data_size": 0 00:42:59.078 } 00:42:59.078 ] 00:42:59.078 }' 00:42:59.078 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:42:59.078 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:59.336 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:42:59.336 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:59.336 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:59.336 [2024-12-09 23:24:39.908776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:42:59.336 [2024-12-09 23:24:39.908993] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:42:59.336 [2024-12-09 23:24:39.909009] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:42:59.336 [2024-12-09 23:24:39.909092] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:42:59.336 [2024-12-09 23:24:39.909167] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:42:59.336 [2024-12-09 23:24:39.909187] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:42:59.336 [2024-12-09 23:24:39.909248] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:59.336 BaseBdev2 00:42:59.336 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:59.336 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:42:59.336 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:42:59.336 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:42:59.336 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:42:59.336 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:42:59.336 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:42:59.336 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:42:59.336 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:59.336 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:59.336 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:59.336 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:42:59.336 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:59.336 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:59.336 [ 00:42:59.336 { 00:42:59.336 "name": "BaseBdev2", 00:42:59.336 "aliases": [ 00:42:59.336 "3127b88b-ce60-4751-8319-faff2ec1d668" 00:42:59.336 ], 00:42:59.336 "product_name": "Malloc disk", 00:42:59.336 "block_size": 4128, 00:42:59.336 "num_blocks": 8192, 00:42:59.336 "uuid": "3127b88b-ce60-4751-8319-faff2ec1d668", 00:42:59.336 "md_size": 32, 00:42:59.336 "md_interleave": true, 00:42:59.336 "dif_type": 0, 00:42:59.336 "assigned_rate_limits": { 00:42:59.336 "rw_ios_per_sec": 0, 00:42:59.336 "rw_mbytes_per_sec": 0, 00:42:59.336 "r_mbytes_per_sec": 0, 00:42:59.336 "w_mbytes_per_sec": 0 00:42:59.336 }, 00:42:59.336 "claimed": true, 00:42:59.336 "claim_type": "exclusive_write", 
00:42:59.336 "zoned": false, 00:42:59.336 "supported_io_types": { 00:42:59.336 "read": true, 00:42:59.336 "write": true, 00:42:59.336 "unmap": true, 00:42:59.336 "flush": true, 00:42:59.336 "reset": true, 00:42:59.336 "nvme_admin": false, 00:42:59.336 "nvme_io": false, 00:42:59.336 "nvme_io_md": false, 00:42:59.336 "write_zeroes": true, 00:42:59.336 "zcopy": true, 00:42:59.336 "get_zone_info": false, 00:42:59.336 "zone_management": false, 00:42:59.336 "zone_append": false, 00:42:59.336 "compare": false, 00:42:59.336 "compare_and_write": false, 00:42:59.336 "abort": true, 00:42:59.336 "seek_hole": false, 00:42:59.336 "seek_data": false, 00:42:59.336 "copy": true, 00:42:59.336 "nvme_iov_md": false 00:42:59.336 }, 00:42:59.336 "memory_domains": [ 00:42:59.336 { 00:42:59.336 "dma_device_id": "system", 00:42:59.336 "dma_device_type": 1 00:42:59.336 }, 00:42:59.336 { 00:42:59.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:42:59.336 "dma_device_type": 2 00:42:59.336 } 00:42:59.336 ], 00:42:59.336 "driver_specific": {} 00:42:59.336 } 00:42:59.336 ] 00:42:59.336 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:59.336 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:42:59.336 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:42:59.336 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:42:59.336 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:42:59.336 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:42:59.336 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:59.336 
23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:59.336 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:59.336 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:42:59.336 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:59.336 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:59.336 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:59.336 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:59.336 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:59.336 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:42:59.336 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:59.336 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:59.595 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:59.595 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:59.595 "name": "Existed_Raid", 00:42:59.595 "uuid": "ebde1213-7811-4a7c-a5ca-45417e522929", 00:42:59.595 "strip_size_kb": 0, 00:42:59.595 "state": "online", 00:42:59.595 "raid_level": "raid1", 00:42:59.595 "superblock": true, 00:42:59.595 "num_base_bdevs": 2, 00:42:59.595 "num_base_bdevs_discovered": 2, 00:42:59.595 
"num_base_bdevs_operational": 2, 00:42:59.595 "base_bdevs_list": [ 00:42:59.595 { 00:42:59.595 "name": "BaseBdev1", 00:42:59.595 "uuid": "372965d5-1533-48cf-8dfc-68f68d99de3c", 00:42:59.595 "is_configured": true, 00:42:59.595 "data_offset": 256, 00:42:59.595 "data_size": 7936 00:42:59.595 }, 00:42:59.595 { 00:42:59.595 "name": "BaseBdev2", 00:42:59.595 "uuid": "3127b88b-ce60-4751-8319-faff2ec1d668", 00:42:59.595 "is_configured": true, 00:42:59.595 "data_offset": 256, 00:42:59.595 "data_size": 7936 00:42:59.595 } 00:42:59.595 ] 00:42:59.595 }' 00:42:59.595 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:59.595 23:24:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:59.854 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:42:59.854 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:42:59.854 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:42:59.854 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:42:59.854 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:42:59.854 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:42:59.854 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:42:59.854 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:42:59.854 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:59.854 23:24:40 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:59.854 [2024-12-09 23:24:40.384785] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:42:59.854 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:59.854 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:42:59.854 "name": "Existed_Raid", 00:42:59.854 "aliases": [ 00:42:59.854 "ebde1213-7811-4a7c-a5ca-45417e522929" 00:42:59.854 ], 00:42:59.854 "product_name": "Raid Volume", 00:42:59.854 "block_size": 4128, 00:42:59.854 "num_blocks": 7936, 00:42:59.854 "uuid": "ebde1213-7811-4a7c-a5ca-45417e522929", 00:42:59.854 "md_size": 32, 00:42:59.854 "md_interleave": true, 00:42:59.854 "dif_type": 0, 00:42:59.854 "assigned_rate_limits": { 00:42:59.854 "rw_ios_per_sec": 0, 00:42:59.854 "rw_mbytes_per_sec": 0, 00:42:59.854 "r_mbytes_per_sec": 0, 00:42:59.854 "w_mbytes_per_sec": 0 00:42:59.854 }, 00:42:59.854 "claimed": false, 00:42:59.854 "zoned": false, 00:42:59.854 "supported_io_types": { 00:42:59.854 "read": true, 00:42:59.854 "write": true, 00:42:59.854 "unmap": false, 00:42:59.854 "flush": false, 00:42:59.854 "reset": true, 00:42:59.854 "nvme_admin": false, 00:42:59.854 "nvme_io": false, 00:42:59.854 "nvme_io_md": false, 00:42:59.854 "write_zeroes": true, 00:42:59.854 "zcopy": false, 00:42:59.854 "get_zone_info": false, 00:42:59.854 "zone_management": false, 00:42:59.854 "zone_append": false, 00:42:59.854 "compare": false, 00:42:59.854 "compare_and_write": false, 00:42:59.854 "abort": false, 00:42:59.854 "seek_hole": false, 00:42:59.854 "seek_data": false, 00:42:59.854 "copy": false, 00:42:59.854 "nvme_iov_md": false 00:42:59.854 }, 00:42:59.854 "memory_domains": [ 00:42:59.854 { 00:42:59.854 "dma_device_id": "system", 00:42:59.854 "dma_device_type": 1 00:42:59.854 }, 00:42:59.854 { 00:42:59.854 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:42:59.854 "dma_device_type": 2 00:42:59.854 }, 00:42:59.854 { 00:42:59.854 "dma_device_id": "system", 00:42:59.854 "dma_device_type": 1 00:42:59.854 }, 00:42:59.854 { 00:42:59.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:42:59.854 "dma_device_type": 2 00:42:59.854 } 00:42:59.854 ], 00:42:59.854 "driver_specific": { 00:42:59.854 "raid": { 00:42:59.854 "uuid": "ebde1213-7811-4a7c-a5ca-45417e522929", 00:42:59.854 "strip_size_kb": 0, 00:42:59.854 "state": "online", 00:42:59.854 "raid_level": "raid1", 00:42:59.854 "superblock": true, 00:42:59.854 "num_base_bdevs": 2, 00:42:59.854 "num_base_bdevs_discovered": 2, 00:42:59.854 "num_base_bdevs_operational": 2, 00:42:59.854 "base_bdevs_list": [ 00:42:59.854 { 00:42:59.854 "name": "BaseBdev1", 00:42:59.854 "uuid": "372965d5-1533-48cf-8dfc-68f68d99de3c", 00:42:59.854 "is_configured": true, 00:42:59.854 "data_offset": 256, 00:42:59.854 "data_size": 7936 00:42:59.854 }, 00:42:59.854 { 00:42:59.854 "name": "BaseBdev2", 00:42:59.854 "uuid": "3127b88b-ce60-4751-8319-faff2ec1d668", 00:42:59.854 "is_configured": true, 00:42:59.854 "data_offset": 256, 00:42:59.854 "data_size": 7936 00:42:59.854 } 00:42:59.854 ] 00:42:59.854 } 00:42:59.854 } 00:42:59.854 }' 00:42:59.854 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:42:59.854 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:42:59.854 BaseBdev2' 00:42:59.854 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:43:00.113 
23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:00.113 [2024-12-09 23:24:40.592432] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:00.113 23:24:40 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:00.113 "name": "Existed_Raid", 00:43:00.113 "uuid": "ebde1213-7811-4a7c-a5ca-45417e522929", 00:43:00.113 "strip_size_kb": 0, 00:43:00.113 "state": "online", 00:43:00.113 "raid_level": "raid1", 00:43:00.113 "superblock": true, 00:43:00.113 "num_base_bdevs": 2, 00:43:00.113 "num_base_bdevs_discovered": 1, 00:43:00.113 "num_base_bdevs_operational": 1, 00:43:00.113 "base_bdevs_list": [ 00:43:00.113 { 00:43:00.113 "name": null, 00:43:00.113 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:43:00.113 "is_configured": false, 00:43:00.113 "data_offset": 0, 00:43:00.113 "data_size": 7936 00:43:00.113 }, 00:43:00.113 { 00:43:00.113 "name": "BaseBdev2", 00:43:00.113 "uuid": "3127b88b-ce60-4751-8319-faff2ec1d668", 00:43:00.113 "is_configured": true, 00:43:00.113 "data_offset": 256, 00:43:00.113 "data_size": 7936 00:43:00.113 } 00:43:00.113 ] 00:43:00.113 }' 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:00.113 23:24:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:00.680 23:24:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:43:00.680 23:24:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:43:00.680 23:24:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:00.680 23:24:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:00.680 23:24:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:43:00.680 23:24:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:00.680 23:24:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:00.680 23:24:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:43:00.680 23:24:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:43:00.680 23:24:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:43:00.680 23:24:41 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:00.680 23:24:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:00.680 [2024-12-09 23:24:41.172798] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:43:00.680 [2024-12-09 23:24:41.172916] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:43:00.680 [2024-12-09 23:24:41.268642] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:43:00.680 [2024-12-09 23:24:41.268699] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:43:00.680 [2024-12-09 23:24:41.268714] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:43:00.680 23:24:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:00.680 23:24:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:43:00.680 23:24:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:43:00.680 23:24:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:00.680 23:24:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:43:00.680 23:24:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:00.680 23:24:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:00.680 23:24:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:00.939 23:24:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:43:00.939 23:24:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:43:00.939 23:24:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:43:00.939 23:24:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88335 00:43:00.939 23:24:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88335 ']' 00:43:00.939 23:24:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88335 00:43:00.939 23:24:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:43:00.939 23:24:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:00.939 23:24:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88335 00:43:00.939 23:24:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:00.939 23:24:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:00.939 killing process with pid 88335 00:43:00.939 23:24:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88335' 00:43:00.939 23:24:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88335 00:43:00.939 [2024-12-09 23:24:41.364647] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:43:00.939 23:24:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88335 00:43:00.939 [2024-12-09 23:24:41.382005] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:43:02.315 
23:24:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:43:02.315 00:43:02.315 real 0m4.993s 00:43:02.315 user 0m7.160s 00:43:02.315 sys 0m0.935s 00:43:02.315 23:24:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:02.315 23:24:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:02.315 ************************************ 00:43:02.315 END TEST raid_state_function_test_sb_md_interleaved 00:43:02.315 ************************************ 00:43:02.315 23:24:42 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:43:02.315 23:24:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:43:02.315 23:24:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:02.315 23:24:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:43:02.315 ************************************ 00:43:02.315 START TEST raid_superblock_test_md_interleaved 00:43:02.315 ************************************ 00:43:02.315 23:24:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:43:02.315 23:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:43:02.315 23:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:43:02.315 23:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:43:02.315 23:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:43:02.315 23:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:43:02.315 23:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:43:02.315 23:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:43:02.315 23:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:43:02.315 23:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:43:02.315 23:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:43:02.315 23:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:43:02.315 23:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:43:02.315 23:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:43:02.315 23:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:43:02.315 23:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:43:02.315 23:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88576 00:43:02.315 23:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:43:02.315 23:24:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88576 00:43:02.315 23:24:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88576 ']' 00:43:02.315 23:24:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:02.315 23:24:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:02.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:43:02.315 23:24:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:02.315 23:24:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:02.315 23:24:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:02.315 [2024-12-09 23:24:42.702984] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:43:02.315 [2024-12-09 23:24:42.703121] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88576 ] 00:43:02.315 [2024-12-09 23:24:42.884269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:02.574 [2024-12-09 23:24:43.008316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:02.833 [2024-12-09 23:24:43.219728] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:43:02.833 [2024-12-09 23:24:43.219777] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:43:03.091 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:03.091 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:43:03.091 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:43:03.091 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:43:03.091 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:43:03.091 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:43:03.091 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:43:03.091 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:43:03.091 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:43:03.091 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:43:03.091 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:43:03.091 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.091 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:03.091 malloc1 00:43:03.091 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.091 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:43:03.091 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.091 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:03.091 [2024-12-09 23:24:43.609095] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:43:03.091 [2024-12-09 23:24:43.609156] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:03.091 [2024-12-09 23:24:43.609188] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:43:03.091 [2024-12-09 23:24:43.609201] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:03.091 
[2024-12-09 23:24:43.611341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:03.091 [2024-12-09 23:24:43.611384] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:43:03.091 pt1 00:43:03.091 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.091 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:43:03.091 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:43:03.091 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:43:03.091 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:43:03.091 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:43:03.091 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:43:03.091 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:43:03.091 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:43:03.091 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:43:03.092 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.092 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:03.092 malloc2 00:43:03.092 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.092 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:43:03.092 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.092 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:03.092 [2024-12-09 23:24:43.668591] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:43:03.092 [2024-12-09 23:24:43.668670] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:03.092 [2024-12-09 23:24:43.668703] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:43:03.092 [2024-12-09 23:24:43.668721] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:03.092 [2024-12-09 23:24:43.670994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:03.092 [2024-12-09 23:24:43.671036] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:43:03.092 pt2 00:43:03.092 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.092 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:43:03.092 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:43:03.092 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:43:03.092 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.092 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:03.092 [2024-12-09 23:24:43.680585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:43:03.092 [2024-12-09 23:24:43.682732] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:43:03.092 [2024-12-09 23:24:43.682921] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:43:03.092 [2024-12-09 23:24:43.682935] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:43:03.092 [2024-12-09 23:24:43.683015] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:43:03.092 [2024-12-09 23:24:43.683087] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:43:03.092 [2024-12-09 23:24:43.683101] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:43:03.092 [2024-12-09 23:24:43.683168] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:03.092 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.092 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:43:03.092 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:03.092 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:03.092 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:03.092 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:03.092 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:43:03.092 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:03.092 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:03.092 
23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:03.092 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:03.092 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:03.092 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:03.092 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.092 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:03.092 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.350 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:03.350 "name": "raid_bdev1", 00:43:03.350 "uuid": "acc36af3-ae82-4cba-96f8-9154e24f5573", 00:43:03.350 "strip_size_kb": 0, 00:43:03.350 "state": "online", 00:43:03.350 "raid_level": "raid1", 00:43:03.350 "superblock": true, 00:43:03.350 "num_base_bdevs": 2, 00:43:03.350 "num_base_bdevs_discovered": 2, 00:43:03.350 "num_base_bdevs_operational": 2, 00:43:03.350 "base_bdevs_list": [ 00:43:03.350 { 00:43:03.350 "name": "pt1", 00:43:03.350 "uuid": "00000000-0000-0000-0000-000000000001", 00:43:03.350 "is_configured": true, 00:43:03.350 "data_offset": 256, 00:43:03.350 "data_size": 7936 00:43:03.350 }, 00:43:03.350 { 00:43:03.350 "name": "pt2", 00:43:03.350 "uuid": "00000000-0000-0000-0000-000000000002", 00:43:03.350 "is_configured": true, 00:43:03.350 "data_offset": 256, 00:43:03.350 "data_size": 7936 00:43:03.350 } 00:43:03.350 ] 00:43:03.350 }' 00:43:03.350 23:24:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:03.350 23:24:43 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:03.608 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:43:03.608 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:43:03.608 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:43:03.608 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:43:03.608 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:43:03.608 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:43:03.608 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:43:03.608 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.608 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:03.608 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:43:03.608 [2024-12-09 23:24:44.120265] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:43:03.608 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.608 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:43:03.608 "name": "raid_bdev1", 00:43:03.608 "aliases": [ 00:43:03.608 "acc36af3-ae82-4cba-96f8-9154e24f5573" 00:43:03.608 ], 00:43:03.608 "product_name": "Raid Volume", 00:43:03.608 "block_size": 4128, 00:43:03.608 "num_blocks": 7936, 00:43:03.608 "uuid": "acc36af3-ae82-4cba-96f8-9154e24f5573", 00:43:03.608 "md_size": 32, 
00:43:03.608 "md_interleave": true, 00:43:03.608 "dif_type": 0, 00:43:03.608 "assigned_rate_limits": { 00:43:03.608 "rw_ios_per_sec": 0, 00:43:03.608 "rw_mbytes_per_sec": 0, 00:43:03.608 "r_mbytes_per_sec": 0, 00:43:03.608 "w_mbytes_per_sec": 0 00:43:03.608 }, 00:43:03.608 "claimed": false, 00:43:03.608 "zoned": false, 00:43:03.608 "supported_io_types": { 00:43:03.608 "read": true, 00:43:03.608 "write": true, 00:43:03.608 "unmap": false, 00:43:03.608 "flush": false, 00:43:03.608 "reset": true, 00:43:03.608 "nvme_admin": false, 00:43:03.608 "nvme_io": false, 00:43:03.608 "nvme_io_md": false, 00:43:03.608 "write_zeroes": true, 00:43:03.608 "zcopy": false, 00:43:03.608 "get_zone_info": false, 00:43:03.608 "zone_management": false, 00:43:03.608 "zone_append": false, 00:43:03.608 "compare": false, 00:43:03.608 "compare_and_write": false, 00:43:03.608 "abort": false, 00:43:03.608 "seek_hole": false, 00:43:03.608 "seek_data": false, 00:43:03.608 "copy": false, 00:43:03.608 "nvme_iov_md": false 00:43:03.608 }, 00:43:03.608 "memory_domains": [ 00:43:03.608 { 00:43:03.608 "dma_device_id": "system", 00:43:03.608 "dma_device_type": 1 00:43:03.608 }, 00:43:03.608 { 00:43:03.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:43:03.608 "dma_device_type": 2 00:43:03.608 }, 00:43:03.608 { 00:43:03.608 "dma_device_id": "system", 00:43:03.608 "dma_device_type": 1 00:43:03.608 }, 00:43:03.608 { 00:43:03.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:43:03.608 "dma_device_type": 2 00:43:03.608 } 00:43:03.608 ], 00:43:03.608 "driver_specific": { 00:43:03.608 "raid": { 00:43:03.608 "uuid": "acc36af3-ae82-4cba-96f8-9154e24f5573", 00:43:03.608 "strip_size_kb": 0, 00:43:03.608 "state": "online", 00:43:03.608 "raid_level": "raid1", 00:43:03.608 "superblock": true, 00:43:03.608 "num_base_bdevs": 2, 00:43:03.608 "num_base_bdevs_discovered": 2, 00:43:03.608 "num_base_bdevs_operational": 2, 00:43:03.608 "base_bdevs_list": [ 00:43:03.608 { 00:43:03.608 "name": "pt1", 00:43:03.608 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:43:03.608 "is_configured": true, 00:43:03.608 "data_offset": 256, 00:43:03.608 "data_size": 7936 00:43:03.608 }, 00:43:03.608 { 00:43:03.608 "name": "pt2", 00:43:03.608 "uuid": "00000000-0000-0000-0000-000000000002", 00:43:03.608 "is_configured": true, 00:43:03.608 "data_offset": 256, 00:43:03.608 "data_size": 7936 00:43:03.608 } 00:43:03.608 ] 00:43:03.608 } 00:43:03.608 } 00:43:03.608 }' 00:43:03.608 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:43:03.608 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:43:03.608 pt2' 00:43:03.608 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:03.608 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:43:03.609 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:43:03.609 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:03.609 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:43:03.609 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.609 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:03.867 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.867 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:43:03.867 23:24:44 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:43:03.867 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:43:03.867 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:03.867 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:43:03.867 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.867 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:03.867 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.867 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:43:03.867 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:43:03.867 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:43:03.867 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:43:03.867 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.867 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:03.867 [2024-12-09 23:24:44.323959] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:43:03.867 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.867 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=acc36af3-ae82-4cba-96f8-9154e24f5573 00:43:03.867 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z acc36af3-ae82-4cba-96f8-9154e24f5573 ']' 00:43:03.867 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:43:03.867 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.867 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:03.867 [2024-12-09 23:24:44.359593] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:43:03.867 [2024-12-09 23:24:44.359623] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:43:03.867 [2024-12-09 23:24:44.359709] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:43:03.867 [2024-12-09 23:24:44.359768] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:43:03.867 [2024-12-09 23:24:44.359782] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:43:03.867 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.867 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:03.867 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:43:03.867 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.867 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:03.867 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.867 23:24:44 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:43:03.868 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:43:03.868 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:43:03.868 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:43:03.868 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.868 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:03.868 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.868 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:43:03.868 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:43:03.868 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.868 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:03.868 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.868 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:43:03.868 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:43:03.868 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.868 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:03.868 23:24:44 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.868 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:43:03.868 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:43:03.868 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:43:03.868 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:43:03.868 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:43:03.868 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:03.868 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:43:03.868 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:03.868 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:43:03.868 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.868 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:03.868 [2024-12-09 23:24:44.479471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:43:03.868 [2024-12-09 23:24:44.481606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:43:03.868 [2024-12-09 23:24:44.481688] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:43:03.868 [2024-12-09 23:24:44.481746] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:43:03.868 [2024-12-09 23:24:44.481764] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:43:03.868 [2024-12-09 23:24:44.481778] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:43:03.868 request: 00:43:03.868 { 00:43:03.868 "name": "raid_bdev1", 00:43:03.868 "raid_level": "raid1", 00:43:03.868 "base_bdevs": [ 00:43:03.868 "malloc1", 00:43:03.868 "malloc2" 00:43:03.868 ], 00:43:03.868 "superblock": false, 00:43:03.868 "method": "bdev_raid_create", 00:43:03.868 "req_id": 1 00:43:03.868 } 00:43:03.868 Got JSON-RPC error response 00:43:03.868 response: 00:43:03.868 { 00:43:03.868 "code": -17, 00:43:03.868 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:43:03.868 } 00:43:03.868 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:43:03.868 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:43:03.868 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:43:03.868 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:43:03.868 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:43:03.868 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:43:03.868 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:03.868 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.868 23:24:44 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:04.127 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.127 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:43:04.127 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:43:04.127 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:43:04.127 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.127 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:04.127 [2024-12-09 23:24:44.543335] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:43:04.127 [2024-12-09 23:24:44.543409] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:04.127 [2024-12-09 23:24:44.543429] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:43:04.127 [2024-12-09 23:24:44.543442] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:04.127 [2024-12-09 23:24:44.545723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:04.127 [2024-12-09 23:24:44.545778] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:43:04.127 [2024-12-09 23:24:44.545865] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:43:04.127 [2024-12-09 23:24:44.545941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:43:04.127 pt1 00:43:04.127 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.127 23:24:44 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:43:04.127 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:04.127 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:43:04.127 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:04.127 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:04.127 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:43:04.127 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:04.127 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:04.127 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:04.127 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:04.127 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:04.127 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:04.127 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.127 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:04.127 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.127 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:04.127 
"name": "raid_bdev1", 00:43:04.127 "uuid": "acc36af3-ae82-4cba-96f8-9154e24f5573", 00:43:04.127 "strip_size_kb": 0, 00:43:04.127 "state": "configuring", 00:43:04.127 "raid_level": "raid1", 00:43:04.127 "superblock": true, 00:43:04.127 "num_base_bdevs": 2, 00:43:04.127 "num_base_bdevs_discovered": 1, 00:43:04.127 "num_base_bdevs_operational": 2, 00:43:04.127 "base_bdevs_list": [ 00:43:04.127 { 00:43:04.127 "name": "pt1", 00:43:04.127 "uuid": "00000000-0000-0000-0000-000000000001", 00:43:04.127 "is_configured": true, 00:43:04.127 "data_offset": 256, 00:43:04.127 "data_size": 7936 00:43:04.127 }, 00:43:04.127 { 00:43:04.127 "name": null, 00:43:04.127 "uuid": "00000000-0000-0000-0000-000000000002", 00:43:04.127 "is_configured": false, 00:43:04.127 "data_offset": 256, 00:43:04.127 "data_size": 7936 00:43:04.127 } 00:43:04.127 ] 00:43:04.127 }' 00:43:04.127 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:04.127 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:04.385 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:43:04.385 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:43:04.385 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:43:04.385 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:43:04.385 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.385 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:04.385 [2024-12-09 23:24:44.978741] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:43:04.385 [2024-12-09 23:24:44.978823] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:04.385 [2024-12-09 23:24:44.978847] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:43:04.385 [2024-12-09 23:24:44.978862] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:04.385 [2024-12-09 23:24:44.979040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:04.385 [2024-12-09 23:24:44.979060] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:43:04.385 [2024-12-09 23:24:44.979118] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:43:04.385 [2024-12-09 23:24:44.979144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:43:04.385 [2024-12-09 23:24:44.979229] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:43:04.385 [2024-12-09 23:24:44.979243] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:43:04.385 [2024-12-09 23:24:44.979312] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:43:04.385 [2024-12-09 23:24:44.979376] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:43:04.385 [2024-12-09 23:24:44.979385] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:43:04.385 [2024-12-09 23:24:44.979477] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:04.385 pt2 00:43:04.385 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.385 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:43:04.385 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:43:04.385 23:24:44 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:43:04.385 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:04.385 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:04.385 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:04.385 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:04.385 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:43:04.385 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:04.385 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:04.385 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:04.385 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:04.385 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:04.385 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:04.385 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.385 23:24:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:04.385 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.650 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:04.650 "name": 
"raid_bdev1", 00:43:04.650 "uuid": "acc36af3-ae82-4cba-96f8-9154e24f5573", 00:43:04.650 "strip_size_kb": 0, 00:43:04.650 "state": "online", 00:43:04.650 "raid_level": "raid1", 00:43:04.650 "superblock": true, 00:43:04.650 "num_base_bdevs": 2, 00:43:04.650 "num_base_bdevs_discovered": 2, 00:43:04.650 "num_base_bdevs_operational": 2, 00:43:04.650 "base_bdevs_list": [ 00:43:04.650 { 00:43:04.650 "name": "pt1", 00:43:04.650 "uuid": "00000000-0000-0000-0000-000000000001", 00:43:04.650 "is_configured": true, 00:43:04.650 "data_offset": 256, 00:43:04.650 "data_size": 7936 00:43:04.650 }, 00:43:04.650 { 00:43:04.650 "name": "pt2", 00:43:04.650 "uuid": "00000000-0000-0000-0000-000000000002", 00:43:04.650 "is_configured": true, 00:43:04.650 "data_offset": 256, 00:43:04.650 "data_size": 7936 00:43:04.650 } 00:43:04.650 ] 00:43:04.650 }' 00:43:04.650 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:04.650 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:04.908 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:43:04.909 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:43:04.909 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:43:04.909 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:43:04.909 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:43:04.909 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:43:04.909 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:43:04.909 23:24:45 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.909 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:04.909 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:43:04.909 [2024-12-09 23:24:45.414808] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:43:04.909 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.909 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:43:04.909 "name": "raid_bdev1", 00:43:04.909 "aliases": [ 00:43:04.909 "acc36af3-ae82-4cba-96f8-9154e24f5573" 00:43:04.909 ], 00:43:04.909 "product_name": "Raid Volume", 00:43:04.909 "block_size": 4128, 00:43:04.909 "num_blocks": 7936, 00:43:04.909 "uuid": "acc36af3-ae82-4cba-96f8-9154e24f5573", 00:43:04.909 "md_size": 32, 00:43:04.909 "md_interleave": true, 00:43:04.909 "dif_type": 0, 00:43:04.909 "assigned_rate_limits": { 00:43:04.909 "rw_ios_per_sec": 0, 00:43:04.909 "rw_mbytes_per_sec": 0, 00:43:04.909 "r_mbytes_per_sec": 0, 00:43:04.909 "w_mbytes_per_sec": 0 00:43:04.909 }, 00:43:04.909 "claimed": false, 00:43:04.909 "zoned": false, 00:43:04.909 "supported_io_types": { 00:43:04.909 "read": true, 00:43:04.909 "write": true, 00:43:04.909 "unmap": false, 00:43:04.909 "flush": false, 00:43:04.909 "reset": true, 00:43:04.909 "nvme_admin": false, 00:43:04.909 "nvme_io": false, 00:43:04.909 "nvme_io_md": false, 00:43:04.909 "write_zeroes": true, 00:43:04.909 "zcopy": false, 00:43:04.909 "get_zone_info": false, 00:43:04.909 "zone_management": false, 00:43:04.909 "zone_append": false, 00:43:04.909 "compare": false, 00:43:04.909 "compare_and_write": false, 00:43:04.909 "abort": false, 00:43:04.909 "seek_hole": false, 00:43:04.909 "seek_data": false, 00:43:04.909 "copy": false, 00:43:04.909 "nvme_iov_md": 
false 00:43:04.909 }, 00:43:04.909 "memory_domains": [ 00:43:04.909 { 00:43:04.909 "dma_device_id": "system", 00:43:04.909 "dma_device_type": 1 00:43:04.909 }, 00:43:04.909 { 00:43:04.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:43:04.909 "dma_device_type": 2 00:43:04.909 }, 00:43:04.909 { 00:43:04.909 "dma_device_id": "system", 00:43:04.909 "dma_device_type": 1 00:43:04.909 }, 00:43:04.909 { 00:43:04.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:43:04.909 "dma_device_type": 2 00:43:04.909 } 00:43:04.909 ], 00:43:04.909 "driver_specific": { 00:43:04.909 "raid": { 00:43:04.909 "uuid": "acc36af3-ae82-4cba-96f8-9154e24f5573", 00:43:04.909 "strip_size_kb": 0, 00:43:04.909 "state": "online", 00:43:04.909 "raid_level": "raid1", 00:43:04.909 "superblock": true, 00:43:04.909 "num_base_bdevs": 2, 00:43:04.909 "num_base_bdevs_discovered": 2, 00:43:04.909 "num_base_bdevs_operational": 2, 00:43:04.909 "base_bdevs_list": [ 00:43:04.909 { 00:43:04.909 "name": "pt1", 00:43:04.909 "uuid": "00000000-0000-0000-0000-000000000001", 00:43:04.909 "is_configured": true, 00:43:04.909 "data_offset": 256, 00:43:04.909 "data_size": 7936 00:43:04.909 }, 00:43:04.909 { 00:43:04.909 "name": "pt2", 00:43:04.909 "uuid": "00000000-0000-0000-0000-000000000002", 00:43:04.909 "is_configured": true, 00:43:04.909 "data_offset": 256, 00:43:04.909 "data_size": 7936 00:43:04.909 } 00:43:04.909 ] 00:43:04.909 } 00:43:04.909 } 00:43:04.909 }' 00:43:04.909 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:43:04.909 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:43:04.909 pt2' 00:43:04.909 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:04.909 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:43:04.909 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:43:04.909 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:04.909 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:43:04.909 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.909 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:05.167 [2024-12-09 23:24:45.622531] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' acc36af3-ae82-4cba-96f8-9154e24f5573 '!=' acc36af3-ae82-4cba-96f8-9154e24f5573 ']' 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:05.167 [2024-12-09 23:24:45.662209] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:43:05.167 "name": "raid_bdev1", 00:43:05.167 "uuid": "acc36af3-ae82-4cba-96f8-9154e24f5573", 00:43:05.167 "strip_size_kb": 0, 00:43:05.167 "state": "online", 00:43:05.167 "raid_level": "raid1", 00:43:05.167 "superblock": true, 00:43:05.167 "num_base_bdevs": 2, 00:43:05.167 "num_base_bdevs_discovered": 1, 00:43:05.167 "num_base_bdevs_operational": 1, 00:43:05.167 "base_bdevs_list": [ 00:43:05.167 { 00:43:05.167 "name": null, 00:43:05.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:05.167 "is_configured": false, 00:43:05.167 "data_offset": 0, 00:43:05.167 "data_size": 7936 00:43:05.167 }, 00:43:05.167 { 00:43:05.167 "name": "pt2", 00:43:05.167 "uuid": "00000000-0000-0000-0000-000000000002", 00:43:05.167 "is_configured": true, 00:43:05.167 "data_offset": 256, 00:43:05.167 "data_size": 7936 00:43:05.167 } 00:43:05.167 ] 00:43:05.167 }' 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:05.167 23:24:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:05.734 [2024-12-09 23:24:46.117549] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:43:05.734 [2024-12-09 23:24:46.117583] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:43:05.734 [2024-12-09 23:24:46.117669] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:43:05.734 [2024-12-09 23:24:46.117717] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:43:05.734 [2024-12-09 23:24:46.117732] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:05.734 [2024-12-09 23:24:46.193467] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:43:05.734 [2024-12-09 23:24:46.193558] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:05.734 [2024-12-09 23:24:46.193591] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:43:05.734 [2024-12-09 23:24:46.193605] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:05.734 [2024-12-09 23:24:46.195860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:05.734 [2024-12-09 23:24:46.195906] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:43:05.734 [2024-12-09 23:24:46.195964] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:43:05.734 [2024-12-09 23:24:46.196021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:43:05.734 [2024-12-09 23:24:46.196090] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:43:05.734 [2024-12-09 23:24:46.196104] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:43:05.734 [2024-12-09 23:24:46.196193] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:43:05.734 [2024-12-09 23:24:46.196254] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:43:05.734 [2024-12-09 23:24:46.196262] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:43:05.734 [2024-12-09 23:24:46.196327] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:05.734 pt2 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:05.734 23:24:46 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:05.734 "name": "raid_bdev1", 00:43:05.734 "uuid": "acc36af3-ae82-4cba-96f8-9154e24f5573", 00:43:05.734 "strip_size_kb": 0, 00:43:05.734 "state": "online", 00:43:05.734 "raid_level": "raid1", 00:43:05.734 "superblock": true, 00:43:05.734 "num_base_bdevs": 2, 00:43:05.734 "num_base_bdevs_discovered": 1, 00:43:05.734 "num_base_bdevs_operational": 1, 00:43:05.734 "base_bdevs_list": [ 00:43:05.734 { 00:43:05.734 "name": null, 00:43:05.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:05.734 "is_configured": false, 00:43:05.734 "data_offset": 256, 00:43:05.734 "data_size": 7936 00:43:05.734 }, 00:43:05.734 { 00:43:05.734 "name": "pt2", 00:43:05.734 "uuid": "00000000-0000-0000-0000-000000000002", 00:43:05.734 "is_configured": true, 00:43:05.734 "data_offset": 256, 00:43:05.734 "data_size": 7936 00:43:05.734 } 00:43:05.734 ] 00:43:05.734 }' 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:05.734 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:05.993 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:43:05.993 23:24:46 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.993 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:05.993 [2024-12-09 23:24:46.588879] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:43:05.993 [2024-12-09 23:24:46.588919] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:43:05.993 [2024-12-09 23:24:46.588998] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:43:05.993 [2024-12-09 23:24:46.589054] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:43:05.993 [2024-12-09 23:24:46.589067] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:43:05.993 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.993 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:05.993 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.993 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:05.993 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:43:05.993 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:06.252 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:43:06.252 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:43:06.252 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:43:06.252 23:24:46 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:43:06.252 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:06.252 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:06.252 [2024-12-09 23:24:46.648808] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:43:06.252 [2024-12-09 23:24:46.648875] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:06.252 [2024-12-09 23:24:46.648902] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:43:06.252 [2024-12-09 23:24:46.648913] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:06.252 [2024-12-09 23:24:46.651103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:06.252 [2024-12-09 23:24:46.651144] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:43:06.252 [2024-12-09 23:24:46.651202] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:43:06.252 [2024-12-09 23:24:46.651256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:43:06.252 [2024-12-09 23:24:46.651356] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:43:06.252 [2024-12-09 23:24:46.651368] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:43:06.252 [2024-12-09 23:24:46.651387] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:43:06.252 [2024-12-09 23:24:46.651466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:43:06.252 [2024-12-09 23:24:46.651540] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:43:06.252 [2024-12-09 23:24:46.651550] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:43:06.252 [2024-12-09 23:24:46.651624] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:43:06.252 [2024-12-09 23:24:46.651679] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:43:06.252 [2024-12-09 23:24:46.651690] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:43:06.252 [2024-12-09 23:24:46.651755] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:06.252 pt1 00:43:06.252 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:06.252 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:43:06.252 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:43:06.252 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:06.252 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:06.252 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:06.252 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:06.252 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:43:06.252 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:06.252 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:06.252 23:24:46 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:06.252 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:06.252 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:06.252 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:06.252 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:06.252 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:06.252 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:06.252 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:06.252 "name": "raid_bdev1", 00:43:06.252 "uuid": "acc36af3-ae82-4cba-96f8-9154e24f5573", 00:43:06.252 "strip_size_kb": 0, 00:43:06.252 "state": "online", 00:43:06.252 "raid_level": "raid1", 00:43:06.252 "superblock": true, 00:43:06.252 "num_base_bdevs": 2, 00:43:06.252 "num_base_bdevs_discovered": 1, 00:43:06.252 "num_base_bdevs_operational": 1, 00:43:06.252 "base_bdevs_list": [ 00:43:06.252 { 00:43:06.252 "name": null, 00:43:06.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:06.252 "is_configured": false, 00:43:06.252 "data_offset": 256, 00:43:06.252 "data_size": 7936 00:43:06.252 }, 00:43:06.252 { 00:43:06.252 "name": "pt2", 00:43:06.252 "uuid": "00000000-0000-0000-0000-000000000002", 00:43:06.252 "is_configured": true, 00:43:06.252 "data_offset": 256, 00:43:06.252 "data_size": 7936 00:43:06.252 } 00:43:06.252 ] 00:43:06.252 }' 00:43:06.252 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:06.252 23:24:46 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:43:06.510 23:24:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:43:06.510 23:24:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:06.510 23:24:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:06.510 23:24:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:43:06.510 23:24:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:06.510 23:24:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:43:06.510 23:24:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:43:06.510 23:24:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:43:06.510 23:24:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:06.510 23:24:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:06.510 [2024-12-09 23:24:47.112371] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:43:06.510 23:24:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:06.510 23:24:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' acc36af3-ae82-4cba-96f8-9154e24f5573 '!=' acc36af3-ae82-4cba-96f8-9154e24f5573 ']' 00:43:06.510 23:24:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88576 00:43:06.510 23:24:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88576 ']' 00:43:06.510 23:24:47 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88576 00:43:06.768 23:24:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:43:06.768 23:24:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:06.768 23:24:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88576 00:43:06.768 23:24:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:06.768 23:24:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:06.768 killing process with pid 88576 00:43:06.768 23:24:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88576' 00:43:06.768 23:24:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 88576 00:43:06.768 [2024-12-09 23:24:47.189631] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:43:06.768 [2024-12-09 23:24:47.189714] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:43:06.768 [2024-12-09 23:24:47.189763] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:43:06.768 [2024-12-09 23:24:47.189781] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:43:06.768 23:24:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 88576 00:43:06.768 [2024-12-09 23:24:47.398716] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:43:08.198 23:24:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:43:08.198 00:43:08.198 real 0m5.928s 00:43:08.198 user 0m8.913s 00:43:08.198 sys 0m1.176s 00:43:08.198 
23:24:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:08.198 23:24:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:08.198 ************************************ 00:43:08.198 END TEST raid_superblock_test_md_interleaved 00:43:08.198 ************************************ 00:43:08.198 23:24:48 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:43:08.198 23:24:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:43:08.198 23:24:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:08.198 23:24:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:43:08.198 ************************************ 00:43:08.198 START TEST raid_rebuild_test_sb_md_interleaved 00:43:08.198 ************************************ 00:43:08.198 23:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:43:08.198 23:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:43:08.198 23:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:43:08.198 23:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:43:08.198 23:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:43:08.198 23:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:43:08.198 23:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:43:08.198 23:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:43:08.198 23:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:43:08.198 23:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:43:08.198 23:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:43:08.198 23:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:43:08.198 23:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:43:08.198 23:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:43:08.198 23:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:43:08.198 23:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:43:08.198 23:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:43:08.198 23:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:43:08.198 23:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:43:08.198 23:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:43:08.198 23:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:43:08.198 23:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:43:08.198 23:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:43:08.198 23:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:43:08.198 23:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:43:08.198 23:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=88899 00:43:08.198 23:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:43:08.198 23:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 88899 00:43:08.198 23:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88899 ']' 00:43:08.198 23:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:08.198 23:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:08.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:08.198 23:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:08.198 23:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:08.198 23:24:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:08.198 I/O size of 3145728 is greater than zero copy threshold (65536). 00:43:08.198 Zero copy mechanism will not be used. 00:43:08.198 [2024-12-09 23:24:48.706972] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:43:08.198 [2024-12-09 23:24:48.707096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88899 ] 00:43:08.457 [2024-12-09 23:24:48.891285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:08.457 [2024-12-09 23:24:49.011100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:08.716 [2024-12-09 23:24:49.224134] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:43:08.716 [2024-12-09 23:24:49.224196] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:43:08.975 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:08.975 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:43:08.975 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:43:08.975 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:43:08.975 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:08.975 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:08.975 BaseBdev1_malloc 00:43:08.975 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:08.975 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:43:08.975 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:08.975 23:24:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:08.975 [2024-12-09 23:24:49.585230] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:43:08.975 [2024-12-09 23:24:49.585298] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:08.975 [2024-12-09 23:24:49.585323] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:43:08.975 [2024-12-09 23:24:49.585338] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:08.975 [2024-12-09 23:24:49.587470] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:08.975 [2024-12-09 23:24:49.587518] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:43:08.975 BaseBdev1 00:43:08.975 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:08.975 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:43:08.975 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:43:08.975 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:08.975 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:09.235 BaseBdev2_malloc 00:43:09.235 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:09.235 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:43:09.235 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:09.235 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:43:09.235 [2024-12-09 23:24:49.643679] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:43:09.235 [2024-12-09 23:24:49.643745] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:09.235 [2024-12-09 23:24:49.643766] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:43:09.235 [2024-12-09 23:24:49.643782] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:09.235 [2024-12-09 23:24:49.645853] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:09.235 [2024-12-09 23:24:49.645896] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:43:09.235 BaseBdev2 00:43:09.235 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:09.235 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:43:09.235 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:09.235 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:09.235 spare_malloc 00:43:09.235 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:09.235 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:43:09.235 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:09.235 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:09.235 spare_delay 00:43:09.235 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:09.235 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:43:09.235 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:09.235 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:09.235 [2024-12-09 23:24:49.724570] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:43:09.235 [2024-12-09 23:24:49.724630] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:09.235 [2024-12-09 23:24:49.724652] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:43:09.235 [2024-12-09 23:24:49.724666] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:09.235 [2024-12-09 23:24:49.726802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:09.235 [2024-12-09 23:24:49.726847] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:43:09.235 spare 00:43:09.235 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:09.235 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:43:09.235 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:09.236 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:09.236 [2024-12-09 23:24:49.736600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:43:09.236 [2024-12-09 23:24:49.738672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:43:09.236 [2024-12-09 
23:24:49.738868] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:43:09.236 [2024-12-09 23:24:49.738888] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:43:09.236 [2024-12-09 23:24:49.738956] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:43:09.236 [2024-12-09 23:24:49.739028] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:43:09.236 [2024-12-09 23:24:49.739037] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:43:09.236 [2024-12-09 23:24:49.739105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:09.236 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:09.236 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:43:09.236 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:09.236 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:09.236 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:09.236 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:09.236 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:43:09.236 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:09.236 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:09.236 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:43:09.236 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:09.236 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:09.236 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:09.236 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:09.236 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:09.236 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:09.236 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:09.236 "name": "raid_bdev1", 00:43:09.236 "uuid": "f4feaa7e-de1f-41bb-89a9-b3652ed7bdf5", 00:43:09.236 "strip_size_kb": 0, 00:43:09.236 "state": "online", 00:43:09.236 "raid_level": "raid1", 00:43:09.236 "superblock": true, 00:43:09.236 "num_base_bdevs": 2, 00:43:09.236 "num_base_bdevs_discovered": 2, 00:43:09.236 "num_base_bdevs_operational": 2, 00:43:09.236 "base_bdevs_list": [ 00:43:09.236 { 00:43:09.236 "name": "BaseBdev1", 00:43:09.236 "uuid": "353c4741-8aed-55c5-a8ad-c9ddbffadf8c", 00:43:09.236 "is_configured": true, 00:43:09.236 "data_offset": 256, 00:43:09.236 "data_size": 7936 00:43:09.236 }, 00:43:09.236 { 00:43:09.236 "name": "BaseBdev2", 00:43:09.236 "uuid": "fc51fded-9ab4-5d0e-b88b-a5fb695ab0ae", 00:43:09.236 "is_configured": true, 00:43:09.236 "data_offset": 256, 00:43:09.236 "data_size": 7936 00:43:09.236 } 00:43:09.236 ] 00:43:09.236 }' 00:43:09.236 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:09.236 23:24:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:09.803 23:24:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:43:09.803 23:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:43:09.803 23:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:09.803 23:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:09.803 [2024-12-09 23:24:50.148457] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:43:09.803 23:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:09.803 23:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:43:09.803 23:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:09.803 23:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:09.803 23:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:09.803 23:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:43:09.803 23:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:09.803 23:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:43:09.803 23:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:43:09.803 23:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:43:09.803 23:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:43:09.803 23:24:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:09.803 23:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:09.803 [2024-12-09 23:24:50.231998] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:43:09.803 23:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:09.803 23:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:43:09.803 23:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:09.803 23:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:09.803 23:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:09.803 23:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:09.803 23:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:43:09.803 23:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:09.803 23:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:09.803 23:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:09.803 23:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:09.803 23:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:09.803 23:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:09.803 23:24:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:09.803 23:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:09.803 23:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:09.803 23:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:09.803 "name": "raid_bdev1", 00:43:09.803 "uuid": "f4feaa7e-de1f-41bb-89a9-b3652ed7bdf5", 00:43:09.803 "strip_size_kb": 0, 00:43:09.803 "state": "online", 00:43:09.803 "raid_level": "raid1", 00:43:09.803 "superblock": true, 00:43:09.803 "num_base_bdevs": 2, 00:43:09.803 "num_base_bdevs_discovered": 1, 00:43:09.803 "num_base_bdevs_operational": 1, 00:43:09.803 "base_bdevs_list": [ 00:43:09.803 { 00:43:09.803 "name": null, 00:43:09.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:09.803 "is_configured": false, 00:43:09.803 "data_offset": 0, 00:43:09.803 "data_size": 7936 00:43:09.803 }, 00:43:09.803 { 00:43:09.803 "name": "BaseBdev2", 00:43:09.803 "uuid": "fc51fded-9ab4-5d0e-b88b-a5fb695ab0ae", 00:43:09.803 "is_configured": true, 00:43:09.803 "data_offset": 256, 00:43:09.803 "data_size": 7936 00:43:09.803 } 00:43:09.803 ] 00:43:09.803 }' 00:43:09.803 23:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:09.803 23:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:10.061 23:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:43:10.061 23:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:10.061 23:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:10.061 [2024-12-09 23:24:50.659457] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:43:10.061 [2024-12-09 23:24:50.677741] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:43:10.061 23:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:10.061 23:24:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:43:10.061 [2024-12-09 23:24:50.680167] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:43:11.436 23:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:11.436 23:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:11.436 23:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:43:11.436 23:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:43:11.436 23:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:11.436 23:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:11.436 23:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:11.436 23:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:11.436 23:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:11.436 23:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:11.436 23:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:11.436 "name": "raid_bdev1", 00:43:11.436 
"uuid": "f4feaa7e-de1f-41bb-89a9-b3652ed7bdf5", 00:43:11.436 "strip_size_kb": 0, 00:43:11.436 "state": "online", 00:43:11.436 "raid_level": "raid1", 00:43:11.436 "superblock": true, 00:43:11.436 "num_base_bdevs": 2, 00:43:11.437 "num_base_bdevs_discovered": 2, 00:43:11.437 "num_base_bdevs_operational": 2, 00:43:11.437 "process": { 00:43:11.437 "type": "rebuild", 00:43:11.437 "target": "spare", 00:43:11.437 "progress": { 00:43:11.437 "blocks": 2560, 00:43:11.437 "percent": 32 00:43:11.437 } 00:43:11.437 }, 00:43:11.437 "base_bdevs_list": [ 00:43:11.437 { 00:43:11.437 "name": "spare", 00:43:11.437 "uuid": "e91410fb-bc41-5bbd-841e-8f0171b73e07", 00:43:11.437 "is_configured": true, 00:43:11.437 "data_offset": 256, 00:43:11.437 "data_size": 7936 00:43:11.437 }, 00:43:11.437 { 00:43:11.437 "name": "BaseBdev2", 00:43:11.437 "uuid": "fc51fded-9ab4-5d0e-b88b-a5fb695ab0ae", 00:43:11.437 "is_configured": true, 00:43:11.437 "data_offset": 256, 00:43:11.437 "data_size": 7936 00:43:11.437 } 00:43:11.437 ] 00:43:11.437 }' 00:43:11.437 23:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:11.437 23:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:11.437 23:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:11.437 23:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:43:11.437 23:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:43:11.437 23:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:11.437 23:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:11.437 [2024-12-09 23:24:51.815237] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:43:11.437 [2024-12-09 23:24:51.885834] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:43:11.437 [2024-12-09 23:24:51.885933] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:11.437 [2024-12-09 23:24:51.885964] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:43:11.437 [2024-12-09 23:24:51.885989] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:43:11.437 23:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:11.437 23:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:43:11.437 23:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:11.437 23:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:11.437 23:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:11.437 23:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:11.437 23:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:43:11.437 23:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:11.437 23:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:11.437 23:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:11.437 23:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:11.437 23:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:11.437 23:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:11.437 23:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:11.437 23:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:11.437 23:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:11.437 23:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:11.437 "name": "raid_bdev1", 00:43:11.437 "uuid": "f4feaa7e-de1f-41bb-89a9-b3652ed7bdf5", 00:43:11.437 "strip_size_kb": 0, 00:43:11.437 "state": "online", 00:43:11.437 "raid_level": "raid1", 00:43:11.437 "superblock": true, 00:43:11.437 "num_base_bdevs": 2, 00:43:11.437 "num_base_bdevs_discovered": 1, 00:43:11.437 "num_base_bdevs_operational": 1, 00:43:11.437 "base_bdevs_list": [ 00:43:11.437 { 00:43:11.437 "name": null, 00:43:11.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:11.437 "is_configured": false, 00:43:11.437 "data_offset": 0, 00:43:11.437 "data_size": 7936 00:43:11.437 }, 00:43:11.437 { 00:43:11.437 "name": "BaseBdev2", 00:43:11.437 "uuid": "fc51fded-9ab4-5d0e-b88b-a5fb695ab0ae", 00:43:11.437 "is_configured": true, 00:43:11.437 "data_offset": 256, 00:43:11.437 "data_size": 7936 00:43:11.437 } 00:43:11.437 ] 00:43:11.437 }' 00:43:11.437 23:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:11.437 23:24:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:12.003 23:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:43:12.003 23:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:43:12.003 23:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:43:12.003 23:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:43:12.003 23:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:12.003 23:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:12.003 23:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:12.003 23:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.003 23:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:12.003 23:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.003 23:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:12.003 "name": "raid_bdev1", 00:43:12.003 "uuid": "f4feaa7e-de1f-41bb-89a9-b3652ed7bdf5", 00:43:12.003 "strip_size_kb": 0, 00:43:12.003 "state": "online", 00:43:12.003 "raid_level": "raid1", 00:43:12.003 "superblock": true, 00:43:12.003 "num_base_bdevs": 2, 00:43:12.003 "num_base_bdevs_discovered": 1, 00:43:12.003 "num_base_bdevs_operational": 1, 00:43:12.003 "base_bdevs_list": [ 00:43:12.003 { 00:43:12.003 "name": null, 00:43:12.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:12.003 "is_configured": false, 00:43:12.003 "data_offset": 0, 00:43:12.003 "data_size": 7936 00:43:12.003 }, 00:43:12.003 { 00:43:12.003 "name": "BaseBdev2", 00:43:12.003 "uuid": "fc51fded-9ab4-5d0e-b88b-a5fb695ab0ae", 00:43:12.003 "is_configured": true, 00:43:12.003 "data_offset": 256, 00:43:12.003 "data_size": 7936 00:43:12.003 } 00:43:12.003 ] 00:43:12.003 }' 
00:43:12.003 23:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:12.003 23:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:43:12.003 23:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:12.003 23:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:43:12.003 23:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:43:12.003 23:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.003 23:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:12.003 [2024-12-09 23:24:52.478571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:43:12.003 [2024-12-09 23:24:52.495053] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:43:12.003 23:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.003 23:24:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:43:12.003 [2024-12-09 23:24:52.497241] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:43:12.936 23:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:12.936 23:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:12.936 23:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:43:12.936 23:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:43:12.936 23:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:12.936 23:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:12.936 23:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:12.936 23:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.936 23:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:12.936 23:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.936 23:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:12.936 "name": "raid_bdev1", 00:43:12.936 "uuid": "f4feaa7e-de1f-41bb-89a9-b3652ed7bdf5", 00:43:12.936 "strip_size_kb": 0, 00:43:12.936 "state": "online", 00:43:12.936 "raid_level": "raid1", 00:43:12.936 "superblock": true, 00:43:12.936 "num_base_bdevs": 2, 00:43:12.936 "num_base_bdevs_discovered": 2, 00:43:12.936 "num_base_bdevs_operational": 2, 00:43:12.936 "process": { 00:43:12.936 "type": "rebuild", 00:43:12.936 "target": "spare", 00:43:12.936 "progress": { 00:43:12.936 "blocks": 2560, 00:43:12.936 "percent": 32 00:43:12.936 } 00:43:12.936 }, 00:43:12.936 "base_bdevs_list": [ 00:43:12.936 { 00:43:12.936 "name": "spare", 00:43:12.936 "uuid": "e91410fb-bc41-5bbd-841e-8f0171b73e07", 00:43:12.936 "is_configured": true, 00:43:12.936 "data_offset": 256, 00:43:12.936 "data_size": 7936 00:43:12.936 }, 00:43:12.936 { 00:43:12.936 "name": "BaseBdev2", 00:43:12.936 "uuid": "fc51fded-9ab4-5d0e-b88b-a5fb695ab0ae", 00:43:12.936 "is_configured": true, 00:43:12.936 "data_offset": 256, 00:43:12.936 "data_size": 7936 00:43:12.936 } 00:43:12.936 ] 00:43:12.936 }' 00:43:12.936 23:24:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:13.195 23:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:13.195 23:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:13.195 23:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:43:13.195 23:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:43:13.195 23:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:43:13.195 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:43:13.195 23:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:43:13.195 23:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:43:13.195 23:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:43:13.195 23:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=743 00:43:13.195 23:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:43:13.195 23:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:13.195 23:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:13.195 23:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:43:13.195 23:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:43:13.195 23:24:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:13.195 23:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:13.195 23:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:13.195 23:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:13.195 23:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:13.195 23:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:13.195 23:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:13.195 "name": "raid_bdev1", 00:43:13.195 "uuid": "f4feaa7e-de1f-41bb-89a9-b3652ed7bdf5", 00:43:13.195 "strip_size_kb": 0, 00:43:13.195 "state": "online", 00:43:13.195 "raid_level": "raid1", 00:43:13.195 "superblock": true, 00:43:13.195 "num_base_bdevs": 2, 00:43:13.195 "num_base_bdevs_discovered": 2, 00:43:13.195 "num_base_bdevs_operational": 2, 00:43:13.195 "process": { 00:43:13.195 "type": "rebuild", 00:43:13.195 "target": "spare", 00:43:13.195 "progress": { 00:43:13.195 "blocks": 2816, 00:43:13.195 "percent": 35 00:43:13.195 } 00:43:13.195 }, 00:43:13.195 "base_bdevs_list": [ 00:43:13.195 { 00:43:13.196 "name": "spare", 00:43:13.196 "uuid": "e91410fb-bc41-5bbd-841e-8f0171b73e07", 00:43:13.196 "is_configured": true, 00:43:13.196 "data_offset": 256, 00:43:13.196 "data_size": 7936 00:43:13.196 }, 00:43:13.196 { 00:43:13.196 "name": "BaseBdev2", 00:43:13.196 "uuid": "fc51fded-9ab4-5d0e-b88b-a5fb695ab0ae", 00:43:13.196 "is_configured": true, 00:43:13.196 "data_offset": 256, 00:43:13.196 "data_size": 7936 00:43:13.196 } 00:43:13.196 ] 00:43:13.196 }' 00:43:13.196 23:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:13.196 23:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:13.196 23:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:13.196 23:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:43:13.196 23:24:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:43:14.131 23:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:43:14.131 23:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:14.131 23:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:14.131 23:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:43:14.131 23:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:43:14.132 23:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:14.132 23:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:14.132 23:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:14.132 23:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:14.132 23:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:14.390 23:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:14.390 23:24:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:14.390 "name": "raid_bdev1", 00:43:14.390 "uuid": "f4feaa7e-de1f-41bb-89a9-b3652ed7bdf5", 00:43:14.390 "strip_size_kb": 0, 00:43:14.390 "state": "online", 00:43:14.390 "raid_level": "raid1", 00:43:14.390 "superblock": true, 00:43:14.390 "num_base_bdevs": 2, 00:43:14.390 "num_base_bdevs_discovered": 2, 00:43:14.390 "num_base_bdevs_operational": 2, 00:43:14.390 "process": { 00:43:14.390 "type": "rebuild", 00:43:14.390 "target": "spare", 00:43:14.390 "progress": { 00:43:14.390 "blocks": 5632, 00:43:14.390 "percent": 70 00:43:14.390 } 00:43:14.390 }, 00:43:14.390 "base_bdevs_list": [ 00:43:14.390 { 00:43:14.390 "name": "spare", 00:43:14.390 "uuid": "e91410fb-bc41-5bbd-841e-8f0171b73e07", 00:43:14.390 "is_configured": true, 00:43:14.390 "data_offset": 256, 00:43:14.390 "data_size": 7936 00:43:14.390 }, 00:43:14.390 { 00:43:14.390 "name": "BaseBdev2", 00:43:14.390 "uuid": "fc51fded-9ab4-5d0e-b88b-a5fb695ab0ae", 00:43:14.390 "is_configured": true, 00:43:14.390 "data_offset": 256, 00:43:14.390 "data_size": 7936 00:43:14.390 } 00:43:14.390 ] 00:43:14.390 }' 00:43:14.390 23:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:14.390 23:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:14.390 23:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:14.390 23:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:43:14.390 23:24:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:43:15.325 [2024-12-09 23:24:55.611711] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:43:15.325 [2024-12-09 23:24:55.611805] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:43:15.325 [2024-12-09 23:24:55.611990] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:15.325 23:24:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:43:15.325 23:24:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:15.325 23:24:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:15.325 23:24:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:43:15.325 23:24:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:43:15.325 23:24:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:15.325 23:24:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:15.325 23:24:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.325 23:24:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:15.325 23:24:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:15.325 23:24:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.325 23:24:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:15.325 "name": "raid_bdev1", 00:43:15.325 "uuid": "f4feaa7e-de1f-41bb-89a9-b3652ed7bdf5", 00:43:15.325 "strip_size_kb": 0, 00:43:15.325 "state": "online", 00:43:15.325 "raid_level": "raid1", 00:43:15.325 "superblock": true, 00:43:15.325 "num_base_bdevs": 2, 00:43:15.325 
"num_base_bdevs_discovered": 2, 00:43:15.325 "num_base_bdevs_operational": 2, 00:43:15.325 "base_bdevs_list": [ 00:43:15.325 { 00:43:15.325 "name": "spare", 00:43:15.325 "uuid": "e91410fb-bc41-5bbd-841e-8f0171b73e07", 00:43:15.325 "is_configured": true, 00:43:15.325 "data_offset": 256, 00:43:15.325 "data_size": 7936 00:43:15.325 }, 00:43:15.325 { 00:43:15.325 "name": "BaseBdev2", 00:43:15.325 "uuid": "fc51fded-9ab4-5d0e-b88b-a5fb695ab0ae", 00:43:15.325 "is_configured": true, 00:43:15.325 "data_offset": 256, 00:43:15.325 "data_size": 7936 00:43:15.325 } 00:43:15.325 ] 00:43:15.325 }' 00:43:15.325 23:24:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:15.583 23:24:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:43:15.583 23:24:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:15.583 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:43:15.583 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:43:15.583 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:43:15.583 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:15.583 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:43:15.583 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:43:15.583 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:15.583 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:15.583 23:24:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.583 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:15.583 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:15.583 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.583 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:15.583 "name": "raid_bdev1", 00:43:15.583 "uuid": "f4feaa7e-de1f-41bb-89a9-b3652ed7bdf5", 00:43:15.583 "strip_size_kb": 0, 00:43:15.583 "state": "online", 00:43:15.583 "raid_level": "raid1", 00:43:15.583 "superblock": true, 00:43:15.583 "num_base_bdevs": 2, 00:43:15.583 "num_base_bdevs_discovered": 2, 00:43:15.583 "num_base_bdevs_operational": 2, 00:43:15.583 "base_bdevs_list": [ 00:43:15.583 { 00:43:15.583 "name": "spare", 00:43:15.583 "uuid": "e91410fb-bc41-5bbd-841e-8f0171b73e07", 00:43:15.583 "is_configured": true, 00:43:15.583 "data_offset": 256, 00:43:15.583 "data_size": 7936 00:43:15.583 }, 00:43:15.583 { 00:43:15.583 "name": "BaseBdev2", 00:43:15.583 "uuid": "fc51fded-9ab4-5d0e-b88b-a5fb695ab0ae", 00:43:15.583 "is_configured": true, 00:43:15.583 "data_offset": 256, 00:43:15.583 "data_size": 7936 00:43:15.583 } 00:43:15.583 ] 00:43:15.583 }' 00:43:15.583 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:15.583 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:43:15.583 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:15.583 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:43:15.583 23:24:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:43:15.583 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:15.583 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:15.583 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:15.583 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:15.583 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:43:15.583 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:15.583 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:15.584 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:15.584 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:15.584 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:15.584 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.584 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:15.584 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:15.584 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.584 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:15.584 "name": 
"raid_bdev1", 00:43:15.584 "uuid": "f4feaa7e-de1f-41bb-89a9-b3652ed7bdf5", 00:43:15.584 "strip_size_kb": 0, 00:43:15.584 "state": "online", 00:43:15.584 "raid_level": "raid1", 00:43:15.584 "superblock": true, 00:43:15.584 "num_base_bdevs": 2, 00:43:15.584 "num_base_bdevs_discovered": 2, 00:43:15.584 "num_base_bdevs_operational": 2, 00:43:15.584 "base_bdevs_list": [ 00:43:15.584 { 00:43:15.584 "name": "spare", 00:43:15.584 "uuid": "e91410fb-bc41-5bbd-841e-8f0171b73e07", 00:43:15.584 "is_configured": true, 00:43:15.584 "data_offset": 256, 00:43:15.584 "data_size": 7936 00:43:15.584 }, 00:43:15.584 { 00:43:15.584 "name": "BaseBdev2", 00:43:15.584 "uuid": "fc51fded-9ab4-5d0e-b88b-a5fb695ab0ae", 00:43:15.584 "is_configured": true, 00:43:15.584 "data_offset": 256, 00:43:15.584 "data_size": 7936 00:43:15.584 } 00:43:15.584 ] 00:43:15.584 }' 00:43:15.584 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:15.584 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:16.150 [2024-12-09 23:24:56.555228] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:43:16.150 [2024-12-09 23:24:56.555282] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:43:16.150 [2024-12-09 23:24:56.555439] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:43:16.150 [2024-12-09 23:24:56.555557] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:43:16.150 [2024-12-09 
23:24:56.555582] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.150 23:24:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:16.150 [2024-12-09 23:24:56.615137] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:43:16.150 [2024-12-09 23:24:56.615235] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:16.150 [2024-12-09 23:24:56.615277] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:43:16.150 [2024-12-09 23:24:56.615299] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:16.150 [2024-12-09 23:24:56.617642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:16.150 [2024-12-09 23:24:56.617692] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:43:16.150 [2024-12-09 23:24:56.617805] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:43:16.150 [2024-12-09 23:24:56.617895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:43:16.150 [2024-12-09 23:24:56.618075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:43:16.150 spare 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:16.150 [2024-12-09 23:24:56.718058] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:43:16.150 [2024-12-09 23:24:56.718114] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:43:16.150 [2024-12-09 23:24:56.718311] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:43:16.150 [2024-12-09 23:24:56.718488] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:43:16.150 [2024-12-09 23:24:56.718521] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:43:16.150 [2024-12-09 23:24:56.718715] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:16.150 23:24:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:16.150 "name": "raid_bdev1", 00:43:16.150 "uuid": "f4feaa7e-de1f-41bb-89a9-b3652ed7bdf5", 00:43:16.150 "strip_size_kb": 0, 00:43:16.150 "state": "online", 00:43:16.150 "raid_level": "raid1", 00:43:16.150 "superblock": true, 00:43:16.150 "num_base_bdevs": 2, 00:43:16.150 "num_base_bdevs_discovered": 2, 00:43:16.150 "num_base_bdevs_operational": 2, 00:43:16.150 "base_bdevs_list": [ 00:43:16.150 { 00:43:16.150 "name": "spare", 00:43:16.150 "uuid": "e91410fb-bc41-5bbd-841e-8f0171b73e07", 00:43:16.150 "is_configured": true, 00:43:16.150 "data_offset": 256, 00:43:16.150 "data_size": 7936 00:43:16.150 }, 00:43:16.150 { 00:43:16.150 "name": "BaseBdev2", 00:43:16.150 "uuid": "fc51fded-9ab4-5d0e-b88b-a5fb695ab0ae", 00:43:16.150 "is_configured": true, 00:43:16.150 "data_offset": 256, 00:43:16.150 "data_size": 7936 00:43:16.150 } 00:43:16.150 ] 00:43:16.150 }' 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:16.150 23:24:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:16.717 23:24:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:16.717 "name": "raid_bdev1", 00:43:16.717 "uuid": "f4feaa7e-de1f-41bb-89a9-b3652ed7bdf5", 00:43:16.717 "strip_size_kb": 0, 00:43:16.717 "state": "online", 00:43:16.717 "raid_level": "raid1", 00:43:16.717 "superblock": true, 00:43:16.717 "num_base_bdevs": 2, 00:43:16.717 "num_base_bdevs_discovered": 2, 00:43:16.717 "num_base_bdevs_operational": 2, 00:43:16.717 "base_bdevs_list": [ 00:43:16.717 { 00:43:16.717 "name": "spare", 00:43:16.717 "uuid": "e91410fb-bc41-5bbd-841e-8f0171b73e07", 00:43:16.717 "is_configured": true, 00:43:16.717 "data_offset": 256, 00:43:16.717 "data_size": 7936 00:43:16.717 }, 00:43:16.717 { 00:43:16.717 "name": "BaseBdev2", 00:43:16.717 "uuid": "fc51fded-9ab4-5d0e-b88b-a5fb695ab0ae", 00:43:16.717 "is_configured": true, 00:43:16.717 "data_offset": 256, 00:43:16.717 "data_size": 7936 00:43:16.717 } 00:43:16.717 ] 00:43:16.717 }' 00:43:16.717 23:24:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:16.717 [2024-12-09 23:24:57.262696] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:43:16.717 23:24:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:16.717 "name": "raid_bdev1", 00:43:16.717 "uuid": "f4feaa7e-de1f-41bb-89a9-b3652ed7bdf5", 00:43:16.717 "strip_size_kb": 0, 00:43:16.717 "state": "online", 00:43:16.717 
"raid_level": "raid1", 00:43:16.717 "superblock": true, 00:43:16.717 "num_base_bdevs": 2, 00:43:16.717 "num_base_bdevs_discovered": 1, 00:43:16.717 "num_base_bdevs_operational": 1, 00:43:16.717 "base_bdevs_list": [ 00:43:16.717 { 00:43:16.717 "name": null, 00:43:16.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:16.717 "is_configured": false, 00:43:16.717 "data_offset": 0, 00:43:16.717 "data_size": 7936 00:43:16.717 }, 00:43:16.717 { 00:43:16.717 "name": "BaseBdev2", 00:43:16.717 "uuid": "fc51fded-9ab4-5d0e-b88b-a5fb695ab0ae", 00:43:16.717 "is_configured": true, 00:43:16.717 "data_offset": 256, 00:43:16.717 "data_size": 7936 00:43:16.717 } 00:43:16.717 ] 00:43:16.717 }' 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:16.717 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:17.285 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:43:17.285 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:17.285 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:17.285 [2024-12-09 23:24:57.694457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:43:17.285 [2024-12-09 23:24:57.694725] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:43:17.285 [2024-12-09 23:24:57.694762] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:43:17.285 [2024-12-09 23:24:57.694828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:43:17.285 [2024-12-09 23:24:57.711313] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:43:17.285 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:17.285 23:24:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:43:17.285 [2024-12-09 23:24:57.713593] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:43:18.218 23:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:18.218 23:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:18.218 23:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:43:18.218 23:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:43:18.218 23:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:18.218 23:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:18.218 23:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:18.218 23:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:18.218 23:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:18.218 23:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:18.218 23:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:43:18.218 "name": "raid_bdev1", 00:43:18.218 "uuid": "f4feaa7e-de1f-41bb-89a9-b3652ed7bdf5", 00:43:18.218 "strip_size_kb": 0, 00:43:18.218 "state": "online", 00:43:18.218 "raid_level": "raid1", 00:43:18.218 "superblock": true, 00:43:18.218 "num_base_bdevs": 2, 00:43:18.218 "num_base_bdevs_discovered": 2, 00:43:18.218 "num_base_bdevs_operational": 2, 00:43:18.218 "process": { 00:43:18.218 "type": "rebuild", 00:43:18.218 "target": "spare", 00:43:18.218 "progress": { 00:43:18.218 "blocks": 2560, 00:43:18.218 "percent": 32 00:43:18.218 } 00:43:18.218 }, 00:43:18.218 "base_bdevs_list": [ 00:43:18.218 { 00:43:18.218 "name": "spare", 00:43:18.218 "uuid": "e91410fb-bc41-5bbd-841e-8f0171b73e07", 00:43:18.218 "is_configured": true, 00:43:18.218 "data_offset": 256, 00:43:18.218 "data_size": 7936 00:43:18.218 }, 00:43:18.218 { 00:43:18.218 "name": "BaseBdev2", 00:43:18.218 "uuid": "fc51fded-9ab4-5d0e-b88b-a5fb695ab0ae", 00:43:18.218 "is_configured": true, 00:43:18.218 "data_offset": 256, 00:43:18.219 "data_size": 7936 00:43:18.219 } 00:43:18.219 ] 00:43:18.219 }' 00:43:18.219 23:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:18.219 23:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:18.219 23:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:18.476 23:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:43:18.476 23:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:43:18.476 23:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:18.476 23:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:18.476 [2024-12-09 23:24:58.865588] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:43:18.476 [2024-12-09 23:24:58.919319] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:43:18.476 [2024-12-09 23:24:58.919424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:18.476 [2024-12-09 23:24:58.919458] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:43:18.476 [2024-12-09 23:24:58.919479] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:43:18.476 23:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:18.476 23:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:43:18.476 23:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:18.476 23:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:18.476 23:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:18.476 23:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:18.476 23:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:43:18.476 23:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:18.476 23:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:18.476 23:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:18.476 23:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:18.476 23:24:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:18.476 23:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:18.476 23:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:18.477 23:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:18.477 23:24:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:18.477 23:24:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:18.477 "name": "raid_bdev1", 00:43:18.477 "uuid": "f4feaa7e-de1f-41bb-89a9-b3652ed7bdf5", 00:43:18.477 "strip_size_kb": 0, 00:43:18.477 "state": "online", 00:43:18.477 "raid_level": "raid1", 00:43:18.477 "superblock": true, 00:43:18.477 "num_base_bdevs": 2, 00:43:18.477 "num_base_bdevs_discovered": 1, 00:43:18.477 "num_base_bdevs_operational": 1, 00:43:18.477 "base_bdevs_list": [ 00:43:18.477 { 00:43:18.477 "name": null, 00:43:18.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:18.477 "is_configured": false, 00:43:18.477 "data_offset": 0, 00:43:18.477 "data_size": 7936 00:43:18.477 }, 00:43:18.477 { 00:43:18.477 "name": "BaseBdev2", 00:43:18.477 "uuid": "fc51fded-9ab4-5d0e-b88b-a5fb695ab0ae", 00:43:18.477 "is_configured": true, 00:43:18.477 "data_offset": 256, 00:43:18.477 "data_size": 7936 00:43:18.477 } 00:43:18.477 ] 00:43:18.477 }' 00:43:18.477 23:24:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:18.477 23:24:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:18.735 23:24:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:43:18.735 23:24:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:18.735 23:24:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:18.735 [2024-12-09 23:24:59.339216] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:43:18.735 [2024-12-09 23:24:59.339320] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:18.735 [2024-12-09 23:24:59.339366] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:43:18.735 [2024-12-09 23:24:59.339407] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:18.735 [2024-12-09 23:24:59.339668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:18.735 [2024-12-09 23:24:59.339708] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:43:18.735 [2024-12-09 23:24:59.339805] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:43:18.735 [2024-12-09 23:24:59.339845] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:43:18.735 [2024-12-09 23:24:59.339881] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:43:18.735 [2024-12-09 23:24:59.339930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:43:18.735 [2024-12-09 23:24:59.356188] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:43:18.735 spare 00:43:18.735 23:24:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:18.735 23:24:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:43:18.735 [2024-12-09 23:24:59.358538] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:43:20.108 23:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:20.108 23:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:20.108 23:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:43:20.108 23:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:43:20.108 23:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:20.108 23:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:20.108 23:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:20.108 23:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:20.108 23:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:20.108 23:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.108 23:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:43:20.108 "name": "raid_bdev1", 00:43:20.108 "uuid": "f4feaa7e-de1f-41bb-89a9-b3652ed7bdf5", 00:43:20.108 "strip_size_kb": 0, 00:43:20.108 "state": "online", 00:43:20.108 "raid_level": "raid1", 00:43:20.108 "superblock": true, 00:43:20.108 "num_base_bdevs": 2, 00:43:20.108 "num_base_bdevs_discovered": 2, 00:43:20.108 "num_base_bdevs_operational": 2, 00:43:20.108 "process": { 00:43:20.108 "type": "rebuild", 00:43:20.108 "target": "spare", 00:43:20.108 "progress": { 00:43:20.108 "blocks": 2560, 00:43:20.108 "percent": 32 00:43:20.108 } 00:43:20.108 }, 00:43:20.108 "base_bdevs_list": [ 00:43:20.108 { 00:43:20.108 "name": "spare", 00:43:20.108 "uuid": "e91410fb-bc41-5bbd-841e-8f0171b73e07", 00:43:20.108 "is_configured": true, 00:43:20.108 "data_offset": 256, 00:43:20.108 "data_size": 7936 00:43:20.108 }, 00:43:20.108 { 00:43:20.108 "name": "BaseBdev2", 00:43:20.108 "uuid": "fc51fded-9ab4-5d0e-b88b-a5fb695ab0ae", 00:43:20.108 "is_configured": true, 00:43:20.108 "data_offset": 256, 00:43:20.108 "data_size": 7936 00:43:20.108 } 00:43:20.108 ] 00:43:20.108 }' 00:43:20.108 23:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:20.108 23:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:20.108 23:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:20.108 23:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:43:20.108 23:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:43:20.108 23:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:20.108 23:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:20.108 [2024-12-09 
23:25:00.486806] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:43:20.108 [2024-12-09 23:25:00.564371] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:43:20.108 [2024-12-09 23:25:00.564484] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:20.108 [2024-12-09 23:25:00.564518] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:43:20.108 [2024-12-09 23:25:00.564533] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:43:20.108 23:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.108 23:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:43:20.108 23:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:20.108 23:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:20.108 23:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:20.108 23:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:20.108 23:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:43:20.108 23:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:20.108 23:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:20.108 23:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:20.108 23:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:20.108 23:25:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:20.108 23:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:20.108 23:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:20.108 23:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:20.108 23:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.108 23:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:20.108 "name": "raid_bdev1", 00:43:20.108 "uuid": "f4feaa7e-de1f-41bb-89a9-b3652ed7bdf5", 00:43:20.108 "strip_size_kb": 0, 00:43:20.108 "state": "online", 00:43:20.108 "raid_level": "raid1", 00:43:20.108 "superblock": true, 00:43:20.108 "num_base_bdevs": 2, 00:43:20.108 "num_base_bdevs_discovered": 1, 00:43:20.108 "num_base_bdevs_operational": 1, 00:43:20.108 "base_bdevs_list": [ 00:43:20.108 { 00:43:20.108 "name": null, 00:43:20.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:20.108 "is_configured": false, 00:43:20.108 "data_offset": 0, 00:43:20.108 "data_size": 7936 00:43:20.108 }, 00:43:20.108 { 00:43:20.108 "name": "BaseBdev2", 00:43:20.108 "uuid": "fc51fded-9ab4-5d0e-b88b-a5fb695ab0ae", 00:43:20.108 "is_configured": true, 00:43:20.108 "data_offset": 256, 00:43:20.108 "data_size": 7936 00:43:20.108 } 00:43:20.108 ] 00:43:20.108 }' 00:43:20.108 23:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:20.108 23:25:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:20.676 23:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:43:20.676 23:25:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:20.676 23:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:43:20.676 23:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:43:20.676 23:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:20.676 23:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:20.676 23:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:20.676 23:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:20.676 23:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:20.676 23:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.676 23:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:20.676 "name": "raid_bdev1", 00:43:20.676 "uuid": "f4feaa7e-de1f-41bb-89a9-b3652ed7bdf5", 00:43:20.676 "strip_size_kb": 0, 00:43:20.676 "state": "online", 00:43:20.676 "raid_level": "raid1", 00:43:20.676 "superblock": true, 00:43:20.676 "num_base_bdevs": 2, 00:43:20.676 "num_base_bdevs_discovered": 1, 00:43:20.676 "num_base_bdevs_operational": 1, 00:43:20.676 "base_bdevs_list": [ 00:43:20.676 { 00:43:20.676 "name": null, 00:43:20.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:20.676 "is_configured": false, 00:43:20.676 "data_offset": 0, 00:43:20.676 "data_size": 7936 00:43:20.676 }, 00:43:20.676 { 00:43:20.676 "name": "BaseBdev2", 00:43:20.676 "uuid": "fc51fded-9ab4-5d0e-b88b-a5fb695ab0ae", 00:43:20.676 "is_configured": true, 00:43:20.676 "data_offset": 256, 
00:43:20.676 "data_size": 7936 00:43:20.676 } 00:43:20.676 ] 00:43:20.676 }' 00:43:20.676 23:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:20.676 23:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:43:20.676 23:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:20.676 23:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:43:20.676 23:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:43:20.676 23:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:20.676 23:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:20.676 23:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.676 23:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:43:20.676 23:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:20.676 23:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:20.676 [2024-12-09 23:25:01.159699] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:43:20.676 [2024-12-09 23:25:01.159776] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:20.676 [2024-12-09 23:25:01.159817] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:43:20.676 [2024-12-09 23:25:01.159839] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:20.676 [2024-12-09 23:25:01.160059] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:20.676 [2024-12-09 23:25:01.160090] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:43:20.676 [2024-12-09 23:25:01.160170] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:43:20.676 [2024-12-09 23:25:01.160196] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:43:20.676 [2024-12-09 23:25:01.160216] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:43:20.676 [2024-12-09 23:25:01.160234] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:43:20.676 BaseBdev1 00:43:20.676 23:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.676 23:25:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:43:21.612 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:43:21.613 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:21.613 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:21.613 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:21.613 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:21.613 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:43:21.613 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:21.613 23:25:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:21.613 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:21.613 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:21.613 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:21.613 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:21.613 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.613 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:21.613 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.613 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:21.613 "name": "raid_bdev1", 00:43:21.613 "uuid": "f4feaa7e-de1f-41bb-89a9-b3652ed7bdf5", 00:43:21.613 "strip_size_kb": 0, 00:43:21.613 "state": "online", 00:43:21.613 "raid_level": "raid1", 00:43:21.613 "superblock": true, 00:43:21.613 "num_base_bdevs": 2, 00:43:21.613 "num_base_bdevs_discovered": 1, 00:43:21.613 "num_base_bdevs_operational": 1, 00:43:21.613 "base_bdevs_list": [ 00:43:21.613 { 00:43:21.613 "name": null, 00:43:21.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:21.613 "is_configured": false, 00:43:21.613 "data_offset": 0, 00:43:21.613 "data_size": 7936 00:43:21.613 }, 00:43:21.613 { 00:43:21.613 "name": "BaseBdev2", 00:43:21.613 "uuid": "fc51fded-9ab4-5d0e-b88b-a5fb695ab0ae", 00:43:21.613 "is_configured": true, 00:43:21.613 "data_offset": 256, 00:43:21.613 "data_size": 7936 00:43:21.613 } 00:43:21.613 ] 00:43:21.613 }' 00:43:21.613 23:25:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:21.613 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:22.180 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:43:22.180 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:22.180 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:43:22.180 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:43:22.180 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:22.180 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:22.180 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:22.180 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.180 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:22.180 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.180 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:22.180 "name": "raid_bdev1", 00:43:22.180 "uuid": "f4feaa7e-de1f-41bb-89a9-b3652ed7bdf5", 00:43:22.180 "strip_size_kb": 0, 00:43:22.180 "state": "online", 00:43:22.180 "raid_level": "raid1", 00:43:22.180 "superblock": true, 00:43:22.180 "num_base_bdevs": 2, 00:43:22.180 "num_base_bdevs_discovered": 1, 00:43:22.180 "num_base_bdevs_operational": 1, 00:43:22.180 "base_bdevs_list": [ 00:43:22.180 { 00:43:22.180 "name": 
null, 00:43:22.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:22.180 "is_configured": false, 00:43:22.180 "data_offset": 0, 00:43:22.180 "data_size": 7936 00:43:22.180 }, 00:43:22.180 { 00:43:22.180 "name": "BaseBdev2", 00:43:22.180 "uuid": "fc51fded-9ab4-5d0e-b88b-a5fb695ab0ae", 00:43:22.180 "is_configured": true, 00:43:22.180 "data_offset": 256, 00:43:22.180 "data_size": 7936 00:43:22.180 } 00:43:22.180 ] 00:43:22.180 }' 00:43:22.180 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:22.180 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:43:22.180 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:22.180 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:43:22.180 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:43:22.180 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:43:22.180 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:43:22.180 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:43:22.180 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:22.180 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:43:22.180 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:22.180 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:43:22.180 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.180 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:22.180 [2024-12-09 23:25:02.705901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:43:22.180 [2024-12-09 23:25:02.706119] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:43:22.180 [2024-12-09 23:25:02.706160] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:43:22.180 request: 00:43:22.180 { 00:43:22.180 "base_bdev": "BaseBdev1", 00:43:22.180 "raid_bdev": "raid_bdev1", 00:43:22.180 "method": "bdev_raid_add_base_bdev", 00:43:22.180 "req_id": 1 00:43:22.180 } 00:43:22.180 Got JSON-RPC error response 00:43:22.180 response: 00:43:22.180 { 00:43:22.180 "code": -22, 00:43:22.180 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:43:22.180 } 00:43:22.180 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:43:22.180 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:43:22.180 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:43:22.180 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:43:22.180 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:43:22.180 23:25:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:43:23.117 23:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:43:23.117 23:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:23.117 23:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:23.117 23:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:23.117 23:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:23.117 23:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:43:23.117 23:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:23.117 23:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:23.117 23:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:23.117 23:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:23.117 23:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:23.117 23:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:23.117 23:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:23.117 23:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:23.117 23:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:23.376 23:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:23.376 "name": "raid_bdev1", 00:43:23.376 "uuid": "f4feaa7e-de1f-41bb-89a9-b3652ed7bdf5", 00:43:23.376 "strip_size_kb": 0, 
00:43:23.376 "state": "online", 00:43:23.376 "raid_level": "raid1", 00:43:23.376 "superblock": true, 00:43:23.376 "num_base_bdevs": 2, 00:43:23.376 "num_base_bdevs_discovered": 1, 00:43:23.376 "num_base_bdevs_operational": 1, 00:43:23.376 "base_bdevs_list": [ 00:43:23.376 { 00:43:23.376 "name": null, 00:43:23.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:23.376 "is_configured": false, 00:43:23.376 "data_offset": 0, 00:43:23.376 "data_size": 7936 00:43:23.376 }, 00:43:23.376 { 00:43:23.376 "name": "BaseBdev2", 00:43:23.376 "uuid": "fc51fded-9ab4-5d0e-b88b-a5fb695ab0ae", 00:43:23.376 "is_configured": true, 00:43:23.376 "data_offset": 256, 00:43:23.376 "data_size": 7936 00:43:23.376 } 00:43:23.376 ] 00:43:23.376 }' 00:43:23.376 23:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:23.376 23:25:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:23.635 23:25:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:43:23.635 23:25:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:23.635 23:25:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:43:23.635 23:25:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:43:23.635 23:25:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:23.635 23:25:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:23.635 23:25:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:23.635 23:25:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:23.635 23:25:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:23.635 23:25:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:23.635 23:25:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:23.635 "name": "raid_bdev1", 00:43:23.635 "uuid": "f4feaa7e-de1f-41bb-89a9-b3652ed7bdf5", 00:43:23.635 "strip_size_kb": 0, 00:43:23.635 "state": "online", 00:43:23.635 "raid_level": "raid1", 00:43:23.635 "superblock": true, 00:43:23.635 "num_base_bdevs": 2, 00:43:23.635 "num_base_bdevs_discovered": 1, 00:43:23.635 "num_base_bdevs_operational": 1, 00:43:23.635 "base_bdevs_list": [ 00:43:23.635 { 00:43:23.635 "name": null, 00:43:23.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:23.635 "is_configured": false, 00:43:23.635 "data_offset": 0, 00:43:23.635 "data_size": 7936 00:43:23.635 }, 00:43:23.635 { 00:43:23.635 "name": "BaseBdev2", 00:43:23.635 "uuid": "fc51fded-9ab4-5d0e-b88b-a5fb695ab0ae", 00:43:23.635 "is_configured": true, 00:43:23.635 "data_offset": 256, 00:43:23.635 "data_size": 7936 00:43:23.635 } 00:43:23.635 ] 00:43:23.635 }' 00:43:23.635 23:25:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:23.635 23:25:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:43:23.635 23:25:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:23.894 23:25:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:43:23.894 23:25:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 88899 00:43:23.894 23:25:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88899 ']' 00:43:23.894 23:25:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88899 00:43:23.894 23:25:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:43:23.894 23:25:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:23.894 23:25:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88899 00:43:23.894 23:25:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:23.894 23:25:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:23.894 killing process with pid 88899 00:43:23.894 23:25:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88899' 00:43:23.894 23:25:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88899 00:43:23.894 Received shutdown signal, test time was about 60.000000 seconds 00:43:23.894 00:43:23.894 Latency(us) 00:43:23.894 [2024-12-09T23:25:04.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:23.894 [2024-12-09T23:25:04.530Z] =================================================================================================================== 00:43:23.894 [2024-12-09T23:25:04.530Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:43:23.894 [2024-12-09 23:25:04.338238] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:43:23.894 23:25:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88899 00:43:23.894 [2024-12-09 23:25:04.338372] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:43:23.894 [2024-12-09 23:25:04.338437] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:43:23.894 [2024-12-09 23:25:04.338453] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:43:24.152 [2024-12-09 23:25:04.643774] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:43:25.526 23:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:43:25.526 00:43:25.526 real 0m17.214s 00:43:25.526 user 0m22.313s 00:43:25.526 sys 0m1.764s 00:43:25.526 23:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:25.526 23:25:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:25.526 ************************************ 00:43:25.526 END TEST raid_rebuild_test_sb_md_interleaved 00:43:25.526 ************************************ 00:43:25.526 23:25:05 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:43:25.526 23:25:05 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:43:25.526 23:25:05 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 88899 ']' 00:43:25.526 23:25:05 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 88899 00:43:25.526 23:25:05 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:43:25.526 00:43:25.526 real 12m4.998s 00:43:25.526 user 16m12.376s 00:43:25.526 sys 2m5.868s 00:43:25.526 23:25:05 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:25.526 23:25:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:43:25.526 ************************************ 00:43:25.526 END TEST bdev_raid 00:43:25.526 ************************************ 00:43:25.526 23:25:05 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:43:25.526 23:25:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:25.526 23:25:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:25.526 23:25:05 -- common/autotest_common.sh@10 -- # set +x 00:43:25.526 
************************************ 00:43:25.527 START TEST spdkcli_raid 00:43:25.527 ************************************ 00:43:25.527 23:25:05 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:43:25.527 * Looking for test storage... 00:43:25.527 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:43:25.527 23:25:06 spdkcli_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:25.527 23:25:06 spdkcli_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:43:25.527 23:25:06 spdkcli_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:25.786 23:25:06 spdkcli_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:25.786 23:25:06 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:25.786 23:25:06 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:25.786 23:25:06 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:25.786 23:25:06 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:43:25.786 23:25:06 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:43:25.786 23:25:06 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:43:25.786 23:25:06 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:43:25.786 23:25:06 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:43:25.786 23:25:06 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:43:25.786 23:25:06 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:43:25.786 23:25:06 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:25.786 23:25:06 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:43:25.786 23:25:06 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:43:25.786 23:25:06 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:25.786 23:25:06 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:25.786 23:25:06 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:43:25.786 23:25:06 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:43:25.786 23:25:06 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:25.786 23:25:06 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:43:25.786 23:25:06 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:43:25.786 23:25:06 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:43:25.786 23:25:06 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:43:25.786 23:25:06 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:25.786 23:25:06 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:43:25.786 23:25:06 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:43:25.786 23:25:06 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:25.786 23:25:06 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:25.786 23:25:06 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:43:25.786 23:25:06 spdkcli_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:25.786 23:25:06 spdkcli_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:25.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:25.786 --rc genhtml_branch_coverage=1 00:43:25.786 --rc genhtml_function_coverage=1 00:43:25.786 --rc genhtml_legend=1 00:43:25.786 --rc geninfo_all_blocks=1 00:43:25.786 --rc geninfo_unexecuted_blocks=1 00:43:25.786 00:43:25.786 ' 00:43:25.786 23:25:06 spdkcli_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:25.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:25.786 --rc genhtml_branch_coverage=1 00:43:25.786 --rc genhtml_function_coverage=1 00:43:25.786 --rc genhtml_legend=1 00:43:25.786 --rc geninfo_all_blocks=1 00:43:25.786 --rc geninfo_unexecuted_blocks=1 00:43:25.786 00:43:25.786 ' 00:43:25.786 
23:25:06 spdkcli_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:25.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:25.786 --rc genhtml_branch_coverage=1 00:43:25.786 --rc genhtml_function_coverage=1 00:43:25.786 --rc genhtml_legend=1 00:43:25.786 --rc geninfo_all_blocks=1 00:43:25.786 --rc geninfo_unexecuted_blocks=1 00:43:25.786 00:43:25.786 ' 00:43:25.786 23:25:06 spdkcli_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:25.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:25.786 --rc genhtml_branch_coverage=1 00:43:25.786 --rc genhtml_function_coverage=1 00:43:25.786 --rc genhtml_legend=1 00:43:25.786 --rc geninfo_all_blocks=1 00:43:25.786 --rc geninfo_unexecuted_blocks=1 00:43:25.786 00:43:25.786 ' 00:43:25.786 23:25:06 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:43:25.786 23:25:06 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:43:25.786 23:25:06 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:43:25.786 23:25:06 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:43:25.786 23:25:06 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:43:25.786 23:25:06 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:43:25.786 23:25:06 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:43:25.786 23:25:06 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:43:25.786 23:25:06 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:43:25.786 23:25:06 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:43:25.786 23:25:06 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:43:25.786 23:25:06 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:43:25.786 23:25:06 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:43:25.786 23:25:06 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:43:25.786 23:25:06 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:43:25.786 23:25:06 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:43:25.786 23:25:06 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:43:25.786 23:25:06 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:43:25.786 23:25:06 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:43:25.786 23:25:06 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:43:25.786 23:25:06 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:43:25.786 23:25:06 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:43:25.786 23:25:06 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:43:25.786 23:25:06 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:43:25.786 23:25:06 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:43:25.786 23:25:06 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:43:25.786 23:25:06 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:43:25.786 23:25:06 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:43:25.786 23:25:06 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:43:25.786 23:25:06 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:43:25.786 23:25:06 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:43:25.786 23:25:06 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:43:25.786 23:25:06 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:43:25.786 23:25:06 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:25.786 23:25:06 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:43:25.786 23:25:06 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:43:25.786 23:25:06 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89575 00:43:25.786 23:25:06 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:43:25.786 23:25:06 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89575 00:43:25.786 23:25:06 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 89575 ']' 00:43:25.786 23:25:06 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:25.786 23:25:06 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:25.786 23:25:06 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:25.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:25.786 23:25:06 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:25.787 23:25:06 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:43:25.787 [2024-12-09 23:25:06.359367] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:43:25.787 [2024-12-09 23:25:06.360753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89575 ] 00:43:26.044 [2024-12-09 23:25:06.557529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:26.302 [2024-12-09 23:25:06.691497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:26.302 [2024-12-09 23:25:06.691539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:27.237 23:25:07 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:27.237 23:25:07 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:43:27.237 23:25:07 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:43:27.237 23:25:07 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:27.237 23:25:07 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:43:27.237 23:25:07 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:43:27.237 23:25:07 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:27.237 23:25:07 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:43:27.237 23:25:07 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:43:27.237 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:43:27.237 ' 00:43:28.612 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:43:28.612 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:43:28.929 23:25:09 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:43:28.929 23:25:09 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:28.929 23:25:09 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:43:28.929 23:25:09 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:43:28.929 23:25:09 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:28.929 23:25:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:43:28.929 23:25:09 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:43:28.929 ' 00:43:29.862 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:43:30.121 23:25:10 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:43:30.121 23:25:10 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:30.121 23:25:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:43:30.121 23:25:10 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:43:30.121 23:25:10 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:30.121 23:25:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:43:30.121 23:25:10 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:43:30.121 23:25:10 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:43:30.687 23:25:11 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:43:30.687 23:25:11 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:43:30.687 23:25:11 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:43:30.687 23:25:11 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:30.687 23:25:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:43:30.687 23:25:11 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:43:30.687 23:25:11 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:30.687 23:25:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:43:30.687 23:25:11 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:43:30.687 ' 00:43:31.620 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:43:31.878 23:25:12 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:43:31.878 23:25:12 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:31.878 23:25:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:43:31.878 23:25:12 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:43:31.878 23:25:12 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:31.878 23:25:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:43:31.878 23:25:12 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:43:31.878 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:43:31.878 ' 00:43:33.250 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:43:33.250 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:43:33.507 23:25:13 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:43:33.507 23:25:13 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:33.507 23:25:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:43:33.507 23:25:13 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89575 00:43:33.507 23:25:13 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89575 ']' 00:43:33.507 23:25:13 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89575 00:43:33.507 23:25:13 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:43:33.507 23:25:13 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:33.507 23:25:13 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89575 00:43:33.507 killing process with pid 89575 00:43:33.507 23:25:14 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:33.507 23:25:14 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:33.507 23:25:14 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89575' 00:43:33.507 23:25:14 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 89575 00:43:33.507 23:25:14 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 89575 00:43:36.105 23:25:16 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:43:36.105 23:25:16 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89575 ']' 00:43:36.105 23:25:16 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89575 00:43:36.105 Process with pid 89575 is not found 00:43:36.105 23:25:16 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89575 ']' 00:43:36.105 23:25:16 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89575 00:43:36.105 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89575) - No such process 00:43:36.105 23:25:16 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 89575 is not found' 00:43:36.105 23:25:16 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:43:36.105 23:25:16 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:43:36.105 23:25:16 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:43:36.105 23:25:16 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:43:36.105 00:43:36.105 real 0m10.464s 00:43:36.105 user 0m21.571s 00:43:36.105 sys 
0m1.214s 00:43:36.105 23:25:16 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:36.105 23:25:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:43:36.105 ************************************ 00:43:36.105 END TEST spdkcli_raid 00:43:36.105 ************************************ 00:43:36.105 23:25:16 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:43:36.105 23:25:16 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:43:36.105 23:25:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:36.105 23:25:16 -- common/autotest_common.sh@10 -- # set +x 00:43:36.105 ************************************ 00:43:36.105 START TEST blockdev_raid5f 00:43:36.105 ************************************ 00:43:36.105 23:25:16 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:43:36.105 * Looking for test storage... 00:43:36.105 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:43:36.105 23:25:16 blockdev_raid5f -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:36.105 23:25:16 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lcov --version 00:43:36.105 23:25:16 blockdev_raid5f -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:36.105 23:25:16 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:36.105 23:25:16 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:36.105 23:25:16 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:36.105 23:25:16 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:36.105 23:25:16 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:43:36.105 23:25:16 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:43:36.105 23:25:16 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:43:36.105 23:25:16 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:43:36.105 23:25:16 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:43:36.105 23:25:16 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:43:36.105 23:25:16 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:43:36.365 23:25:16 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:36.365 23:25:16 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:43:36.365 23:25:16 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:43:36.365 23:25:16 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:36.365 23:25:16 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:36.365 23:25:16 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:43:36.365 23:25:16 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:43:36.365 23:25:16 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:36.365 23:25:16 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:43:36.365 23:25:16 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:43:36.365 23:25:16 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:43:36.365 23:25:16 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:43:36.365 23:25:16 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:36.365 23:25:16 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:43:36.365 23:25:16 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:43:36.365 23:25:16 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:36.365 23:25:16 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:36.365 23:25:16 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:43:36.365 23:25:16 blockdev_raid5f -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:36.365 23:25:16 blockdev_raid5f -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:36.365 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:36.365 --rc genhtml_branch_coverage=1 00:43:36.365 --rc genhtml_function_coverage=1 00:43:36.365 --rc genhtml_legend=1 00:43:36.365 --rc geninfo_all_blocks=1 00:43:36.365 --rc geninfo_unexecuted_blocks=1 00:43:36.365 00:43:36.365 ' 00:43:36.365 23:25:16 blockdev_raid5f -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:36.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:36.365 --rc genhtml_branch_coverage=1 00:43:36.365 --rc genhtml_function_coverage=1 00:43:36.365 --rc genhtml_legend=1 00:43:36.365 --rc geninfo_all_blocks=1 00:43:36.365 --rc geninfo_unexecuted_blocks=1 00:43:36.365 00:43:36.365 ' 00:43:36.365 23:25:16 blockdev_raid5f -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:36.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:36.365 --rc genhtml_branch_coverage=1 00:43:36.365 --rc genhtml_function_coverage=1 00:43:36.365 --rc genhtml_legend=1 00:43:36.365 --rc geninfo_all_blocks=1 00:43:36.365 --rc geninfo_unexecuted_blocks=1 00:43:36.365 00:43:36.365 ' 00:43:36.365 23:25:16 blockdev_raid5f -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:36.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:36.365 --rc genhtml_branch_coverage=1 00:43:36.365 --rc genhtml_function_coverage=1 00:43:36.365 --rc genhtml_legend=1 00:43:36.365 --rc geninfo_all_blocks=1 00:43:36.365 --rc geninfo_unexecuted_blocks=1 00:43:36.365 00:43:36.365 ' 00:43:36.365 23:25:16 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:43:36.365 23:25:16 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:43:36.365 23:25:16 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:43:36.365 23:25:16 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:43:36.365 23:25:16 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
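The `lt 1.15 2` trace above exercises a component-wise version compare: both versions are split on `.-:` into arrays and compared field by field. A minimal standalone sketch of that technique (helper name and padding behavior are illustrative, not the exact `scripts/common.sh` implementation):

```shell
#!/usr/bin/env bash
# Component-wise "less than" version compare, as traced above for `lt 1.15 2`.
lt() {
  local IFS=.-:   # split version strings on dots, dashes, and colons
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local i len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( i = 0; i < len; i++ )); do
    # missing components are treated as 0 (so 1.15 vs 2 compares 1.15.0-style)
    local a=${ver1[i]:-0} b=${ver2[i]:-0}
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1  # equal versions are not strictly less than
}

lt 1.15 2 && echo "1.15 < 2"
```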
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:43:36.365 23:25:16 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:43:36.365 23:25:16 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:43:36.365 23:25:16 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:43:36.365 23:25:16 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:43:36.365 23:25:16 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:43:36.365 23:25:16 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:43:36.365 23:25:16 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:43:36.365 23:25:16 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:43:36.365 23:25:16 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:43:36.365 23:25:16 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:43:36.365 23:25:16 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:43:36.365 23:25:16 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:43:36.365 23:25:16 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:43:36.365 23:25:16 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:43:36.365 23:25:16 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:43:36.365 23:25:16 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:43:36.365 23:25:16 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:43:36.365 23:25:16 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:43:36.365 23:25:16 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:43:36.365 23:25:16 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=89864 00:43:36.365 23:25:16 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:43:36.365 23:25:16 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:43:36.365 23:25:16 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 89864 00:43:36.365 23:25:16 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 89864 ']' 00:43:36.365 23:25:16 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:36.365 23:25:16 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:36.365 23:25:16 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:36.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:36.365 23:25:16 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:36.365 23:25:16 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:43:36.365 [2024-12-09 23:25:16.895056] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:43:36.365 [2024-12-09 23:25:16.895359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89864 ] 00:43:36.624 [2024-12-09 23:25:17.076604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:36.624 [2024-12-09 23:25:17.195319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:37.557 23:25:18 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:37.557 23:25:18 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:43:37.557 23:25:18 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:43:37.557 23:25:18 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:43:37.557 23:25:18 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:43:37.557 23:25:18 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:43:37.557 23:25:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:43:37.557 Malloc0 00:43:37.557 Malloc1 00:43:37.816 Malloc2 00:43:37.816 23:25:18 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:37.816 23:25:18 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:43:37.816 23:25:18 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:37.816 23:25:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:43:37.816 23:25:18 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:37.816 23:25:18 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:43:37.816 23:25:18 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:43:37.816 23:25:18 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:37.816 23:25:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:43:37.816 23:25:18 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:37.816 23:25:18 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:43:37.816 23:25:18 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:37.816 23:25:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:43:37.816 23:25:18 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:37.816 23:25:18 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:43:37.816 23:25:18 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:37.816 23:25:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:43:37.816 23:25:18 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:37.816 23:25:18 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:43:37.816 23:25:18 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 
00:43:37.816 23:25:18 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:37.816 23:25:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:43:37.816 23:25:18 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:43:37.816 23:25:18 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:37.816 23:25:18 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:43:37.816 23:25:18 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:43:37.816 23:25:18 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "828648a4-7047-487d-9af6-f6a41e3b8ffb"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "828648a4-7047-487d-9af6-f6a41e3b8ffb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "828648a4-7047-487d-9af6-f6a41e3b8ffb",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "9bf7a429-1cca-495d-b85d-6bf9ca263589",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"b28195e9-cd96-409c-881e-6168fb61dfa2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "75db3ecb-5845-4f70-b5f9-1b7462356aa7",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:43:38.074 23:25:18 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:43:38.074 23:25:18 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:43:38.074 23:25:18 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:43:38.074 23:25:18 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 89864 00:43:38.074 23:25:18 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 89864 ']' 00:43:38.074 23:25:18 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 89864 00:43:38.074 23:25:18 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:43:38.074 23:25:18 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:38.074 23:25:18 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89864 00:43:38.074 killing process with pid 89864 00:43:38.074 23:25:18 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:38.074 23:25:18 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:38.074 23:25:18 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89864' 00:43:38.074 23:25:18 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 89864 00:43:38.074 23:25:18 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 89864 00:43:40.604 23:25:21 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:43:40.604 23:25:21 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:43:40.604 23:25:21 
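The `blockdev.sh@785`/`@786` steps above collect bdev names by piping `rpc_cmd bdev_get_bdevs` through jq, keeping only unclaimed bdevs and extracting `.name`. A trimmed sketch of that filtering pattern, using a sample JSON cut down from the raid5f output in this log (the full RPC output carries many more fields):

```shell
#!/usr/bin/env bash
# Sample bdev_get_bdevs output, reduced to the fields the filter touches.
bdevs_json='[{"name": "raid5f", "claimed": false},
             {"name": "Malloc0", "claimed": true}]'

# Keep only bdevs not claimed by another module, then pull out the names,
# mirroring: jq -r '.[] | select(.claimed == false)' and jq -r .name
mapfile -t bdevs_name < <(jq -r '.[] | select(.claimed == false) | .name' <<< "$bdevs_json")

printf '%s\n' "${bdevs_name[@]}"   # only raid5f survives the filter
```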
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:43:40.604 23:25:21 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:40.604 23:25:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:43:40.604 ************************************ 00:43:40.604 START TEST bdev_hello_world 00:43:40.604 ************************************ 00:43:40.604 23:25:21 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:43:40.862 [2024-12-09 23:25:21.297239] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:43:40.862 [2024-12-09 23:25:21.297361] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89931 ] 00:43:40.862 [2024-12-09 23:25:21.478668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:41.120 [2024-12-09 23:25:21.595874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:41.714 [2024-12-09 23:25:22.125474] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:43:41.714 [2024-12-09 23:25:22.125526] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:43:41.714 [2024-12-09 23:25:22.125547] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:43:41.714 [2024-12-09 23:25:22.126054] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:43:41.714 [2024-12-09 23:25:22.126183] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:43:41.714 [2024-12-09 23:25:22.126202] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:43:41.714 [2024-12-09 23:25:22.126254] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:43:41.714 00:43:41.714 [2024-12-09 23:25:22.126275] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:43:43.089 00:43:43.089 real 0m2.343s 00:43:43.089 user 0m1.944s 00:43:43.089 sys 0m0.276s 00:43:43.089 23:25:23 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:43.089 23:25:23 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:43:43.089 ************************************ 00:43:43.089 END TEST bdev_hello_world 00:43:43.089 ************************************ 00:43:43.089 23:25:23 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:43:43.089 23:25:23 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:43:43.089 23:25:23 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:43.089 23:25:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:43:43.089 ************************************ 00:43:43.089 START TEST bdev_bounds 00:43:43.089 ************************************ 00:43:43.089 23:25:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:43:43.090 23:25:23 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=89973 00:43:43.090 23:25:23 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:43:43.090 23:25:23 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:43:43.090 Process bdevio pid: 89973 00:43:43.090 23:25:23 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 89973' 00:43:43.090 23:25:23 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 89973 00:43:43.090 23:25:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 89973 ']' 00:43:43.090 23:25:23 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:43.090 23:25:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:43.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:43.090 23:25:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:43.090 23:25:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:43.090 23:25:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:43:43.090 [2024-12-09 23:25:23.715808] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:43:43.090 [2024-12-09 23:25:23.715940] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89973 ] 00:43:43.348 [2024-12-09 23:25:23.896334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:43:43.606 [2024-12-09 23:25:24.019121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:43.606 [2024-12-09 23:25:24.019214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:43.606 [2024-12-09 23:25:24.019244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:44.172 23:25:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:44.172 23:25:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:43:44.172 23:25:24 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:43:44.172 I/O targets: 00:43:44.172 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:43:44.172 00:43:44.172 
00:43:44.172 CUnit - A unit testing framework for C - Version 2.1-3 00:43:44.172 http://cunit.sourceforge.net/ 00:43:44.172 00:43:44.172 00:43:44.172 Suite: bdevio tests on: raid5f 00:43:44.172 Test: blockdev write read block ...passed 00:43:44.172 Test: blockdev write zeroes read block ...passed 00:43:44.172 Test: blockdev write zeroes read no split ...passed 00:43:44.172 Test: blockdev write zeroes read split ...passed 00:43:44.429 Test: blockdev write zeroes read split partial ...passed 00:43:44.429 Test: blockdev reset ...passed 00:43:44.429 Test: blockdev write read 8 blocks ...passed 00:43:44.429 Test: blockdev write read size > 128k ...passed 00:43:44.429 Test: blockdev write read invalid size ...passed 00:43:44.429 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:43:44.429 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:43:44.429 Test: blockdev write read max offset ...passed 00:43:44.429 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:43:44.429 Test: blockdev writev readv 8 blocks ...passed 00:43:44.429 Test: blockdev writev readv 30 x 1block ...passed 00:43:44.429 Test: blockdev writev readv block ...passed 00:43:44.429 Test: blockdev writev readv size > 128k ...passed 00:43:44.429 Test: blockdev writev readv size > 128k in two iovs ...passed 00:43:44.429 Test: blockdev comparev and writev ...passed 00:43:44.429 Test: blockdev nvme passthru rw ...passed 00:43:44.429 Test: blockdev nvme passthru vendor specific ...passed 00:43:44.429 Test: blockdev nvme admin passthru ...passed 00:43:44.429 Test: blockdev copy ...passed 00:43:44.429 00:43:44.429 Run Summary: Type Total Ran Passed Failed Inactive 00:43:44.429 suites 1 1 n/a 0 0 00:43:44.429 tests 23 23 23 0 0 00:43:44.429 asserts 130 130 130 0 n/a 00:43:44.429 00:43:44.429 Elapsed time = 0.587 seconds 00:43:44.429 0 00:43:44.429 23:25:24 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 89973 00:43:44.429 
23:25:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 89973 ']' 00:43:44.429 23:25:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 89973 00:43:44.429 23:25:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:43:44.429 23:25:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:44.429 23:25:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89973 00:43:44.429 23:25:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:44.429 23:25:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:44.429 23:25:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89973' 00:43:44.429 killing process with pid 89973 00:43:44.429 23:25:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 89973 00:43:44.429 23:25:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 89973 00:43:45.801 23:25:26 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:43:45.801 00:43:45.801 real 0m2.800s 00:43:45.801 user 0m6.936s 00:43:45.801 sys 0m0.408s 00:43:45.801 23:25:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:45.801 23:25:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:43:45.801 ************************************ 00:43:45.801 END TEST bdev_bounds 00:43:45.801 ************************************ 00:43:46.060 23:25:26 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:43:46.060 23:25:26 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:43:46.060 23:25:26 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:46.060 
23:25:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:43:46.060 ************************************ 00:43:46.060 START TEST bdev_nbd 00:43:46.060 ************************************ 00:43:46.060 23:25:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:43:46.060 23:25:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:43:46.060 23:25:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:43:46.060 23:25:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:46.060 23:25:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:43:46.060 23:25:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:43:46.060 23:25:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:43:46.060 23:25:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:43:46.060 23:25:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:43:46.060 23:25:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:43:46.060 23:25:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:43:46.060 23:25:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:43:46.060 23:25:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:43:46.060 23:25:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:43:46.060 23:25:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:43:46.060 23:25:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:43:46.060 23:25:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90033 00:43:46.060 23:25:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:43:46.060 23:25:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:43:46.060 23:25:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90033 /var/tmp/spdk-nbd.sock 00:43:46.060 23:25:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90033 ']' 00:43:46.060 23:25:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:43:46.060 23:25:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:46.060 23:25:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:43:46.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:43:46.060 23:25:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:46.060 23:25:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:43:46.060 [2024-12-09 23:25:26.597172] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:43:46.060 [2024-12-09 23:25:26.597495] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:46.319 [2024-12-09 23:25:26.779650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:46.319 [2024-12-09 23:25:26.904697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:46.884 23:25:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:46.884 23:25:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:43:46.884 23:25:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:43:46.884 23:25:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:46.884 23:25:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:43:46.884 23:25:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:43:46.884 23:25:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:43:46.884 23:25:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:46.884 23:25:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:43:46.884 23:25:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:43:46.884 23:25:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:43:46.884 23:25:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:43:46.885 23:25:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:43:46.885 23:25:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:43:46.885 23:25:27 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:43:47.142 23:25:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:43:47.142 23:25:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:43:47.142 23:25:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:43:47.142 23:25:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:43:47.142 23:25:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:43:47.142 23:25:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:43:47.142 23:25:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:43:47.142 23:25:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:43:47.142 23:25:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:43:47.142 23:25:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:43:47.142 23:25:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:43:47.142 23:25:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:47.142 1+0 records in 00:43:47.142 1+0 records out 00:43:47.142 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000410754 s, 10.0 MB/s 00:43:47.142 23:25:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:47.400 23:25:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:43:47.400 23:25:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:47.400 23:25:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:43:47.400 23:25:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:43:47.400 23:25:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:43:47.400 23:25:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:43:47.400 23:25:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:43:47.400 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:43:47.400 { 00:43:47.400 "nbd_device": "/dev/nbd0", 00:43:47.400 "bdev_name": "raid5f" 00:43:47.400 } 00:43:47.400 ]' 00:43:47.400 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:43:47.400 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:43:47.400 { 00:43:47.400 "nbd_device": "/dev/nbd0", 00:43:47.400 "bdev_name": "raid5f" 00:43:47.400 } 00:43:47.400 ]' 00:43:47.400 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:43:47.666 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:43:47.666 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:47.666 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:43:47.666 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:43:47.666 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:43:47.666 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:47.666 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:43:47.936 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
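The `waitfornbd` trace above (grep of `/proc/partitions`, then a single direct-I/O `dd` read) is a poll-then-verify pattern for NBD attach. A hedged sketch of that pattern; the retry count and 4 KiB read mirror the trace, but this is illustrative only since it needs a real `/dev/nbd0` to succeed:

```shell
#!/usr/bin/env bash
# Wait for an nbd device to register with the kernel, then prove it
# serves I/O with one O_DIRECT read (as in the common.sh trace above).
waitfornbd() {
  local nbd_name=$1 i
  for (( i = 1; i <= 20; i++ )); do
    # the kernel lists the device in /proc/partitions once it is live
    grep -q -w "$nbd_name" /proc/partitions && break
    sleep 0.1
  done
  (( i <= 20 )) || return 1   # never appeared within ~2s
  # a single direct-I/O read confirms the block device answers requests
  dd if="/dev/$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct
}
```

On a box without the nbd module loaded, the poll times out and the function returns nonzero, which is what the test harness's `break`/`return` pair above distinguishes.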
/dev/nbd0 00:43:47.936 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:47.936 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:47.936 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:47.936 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:47.936 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:47.936 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:47.936 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:47.936 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:43:47.936 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:47.936 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:43:47.936 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:43:47.936 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:43:48.194 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:43:48.194 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:43:48.194 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:43:48.194 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:43:48.194 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:43:48.194 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:43:48.194 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:43:48.194 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:43:48.194 23:25:28 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:43:48.194 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:43:48.194 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:43:48.194 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:48.194 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:43:48.194 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:43:48.194 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:43:48.194 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:43:48.194 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:43:48.195 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:48.195 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:43:48.195 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:43:48.195 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:43:48.195 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:43:48.195 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:43:48.195 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:43:48.195 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:43:48.195 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:43:48.453 /dev/nbd0 00:43:48.453 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:43:48.453 23:25:28 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:43:48.453 23:25:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:43:48.453 23:25:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:43:48.453 23:25:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:43:48.453 23:25:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:43:48.453 23:25:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:43:48.453 23:25:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:43:48.453 23:25:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:43:48.453 23:25:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:43:48.453 23:25:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:48.453 1+0 records in 00:43:48.453 1+0 records out 00:43:48.453 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241391 s, 17.0 MB/s 00:43:48.453 23:25:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:48.453 23:25:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:43:48.453 23:25:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:48.453 23:25:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:43:48.453 23:25:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:43:48.453 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:48.453 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:43:48.453 23:25:28 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:43:48.453 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:48.453 23:25:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:43:48.718 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:43:48.718 { 00:43:48.718 "nbd_device": "/dev/nbd0", 00:43:48.718 "bdev_name": "raid5f" 00:43:48.718 } 00:43:48.718 ]' 00:43:48.718 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:43:48.718 { 00:43:48.718 "nbd_device": "/dev/nbd0", 00:43:48.718 "bdev_name": "raid5f" 00:43:48.718 } 00:43:48.718 ]' 00:43:48.718 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:43:48.718 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:43:48.718 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:43:48.718 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:43:48.718 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:43:48.718 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:43:48.718 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:43:48.718 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:43:48.718 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:43:48.718 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:43:48.718 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:43:48.718 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:43:48.718 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:43:48.718 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:43:48.718 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:43:48.719 256+0 records in 00:43:48.719 256+0 records out 00:43:48.719 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136444 s, 76.9 MB/s 00:43:48.719 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:43:48.719 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:43:48.719 256+0 records in 00:43:48.719 256+0 records out 00:43:48.719 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0404539 s, 25.9 MB/s 00:43:48.719 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:43:48.719 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:43:48.719 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:43:48.719 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:43:48.719 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:43:48.719 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:43:48.719 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:43:48.719 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:43:48.719 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:43:48.719 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:43:48.719 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:43:48.719 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:48.719 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:43:48.719 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:43:48.719 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:43:48.719 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:48.719 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:43:48.975 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:48.975 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:48.975 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:48.975 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:48.975 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:48.975 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:48.975 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:48.975 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:48.975 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:43:48.975 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:48.975 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:43:49.233 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:43:49.234 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:43:49.234 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:43:49.234 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:43:49.234 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:43:49.234 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:43:49.234 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:43:49.234 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:43:49.234 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:43:49.234 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:43:49.234 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:43:49.234 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:43:49.234 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:43:49.234 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:49.234 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:43:49.234 23:25:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:43:49.492 malloc_lvol_verify 00:43:49.750 23:25:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:43:49.750 bf77d11a-df5a-4174-8580-f905d5e2d74a 00:43:49.751 23:25:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:43:50.009 a8644203-f2fd-4308-953d-91404a2238ed 00:43:50.009 23:25:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:43:50.268 /dev/nbd0 00:43:50.268 23:25:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:43:50.268 23:25:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:43:50.268 23:25:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:43:50.268 23:25:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:43:50.268 23:25:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:43:50.268 mke2fs 1.47.0 (5-Feb-2023) 00:43:50.268 Discarding device blocks: 0/4096 done 00:43:50.268 Creating filesystem with 4096 1k blocks and 1024 inodes 00:43:50.268 00:43:50.268 Allocating group tables: 0/1 done 00:43:50.268 Writing inode tables: 0/1 done 00:43:50.268 Creating journal (1024 blocks): done 00:43:50.268 Writing superblocks and filesystem accounting information: 0/1 done 00:43:50.268 00:43:50.268 23:25:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:43:50.268 23:25:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:50.268 23:25:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:43:50.268 23:25:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:43:50.268 23:25:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:43:50.268 23:25:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:50.268 23:25:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:43:50.526 23:25:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:50.526 23:25:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:50.526 23:25:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:50.526 23:25:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:50.526 23:25:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:50.526 23:25:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:50.526 23:25:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:50.526 23:25:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:50.526 23:25:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90033 00:43:50.526 23:25:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90033 ']' 00:43:50.526 23:25:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90033 00:43:50.526 23:25:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:43:50.526 23:25:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:50.526 23:25:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90033 00:43:50.784 23:25:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:50.784 killing process with pid 90033 00:43:50.784 23:25:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:50.784 23:25:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90033' 00:43:50.784 23:25:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90033 00:43:50.784 23:25:31 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90033 00:43:52.158 23:25:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:43:52.158 00:43:52.158 real 0m6.276s 00:43:52.158 user 0m8.533s 00:43:52.158 sys 0m1.542s 00:43:52.158 23:25:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:52.158 23:25:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:43:52.158 ************************************ 00:43:52.158 END TEST bdev_nbd 00:43:52.158 ************************************ 00:43:52.417 23:25:32 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:43:52.417 23:25:32 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:43:52.417 23:25:32 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:43:52.417 23:25:32 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:43:52.417 23:25:32 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:43:52.417 23:25:32 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:52.417 23:25:32 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:43:52.417 ************************************ 00:43:52.417 START TEST bdev_fio 00:43:52.417 ************************************ 00:43:52.417 23:25:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:43:52.417 23:25:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:43:52.417 23:25:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:43:52.417 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:43:52.417 23:25:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:43:52.417 23:25:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:43:52.417 23:25:32 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:43:52.417 23:25:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:43:52.417 23:25:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:43:52.417 23:25:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:43:52.417 23:25:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:43:52.417 23:25:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:43:52.417 23:25:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:43:52.417 23:25:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:43:52.417 23:25:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:43:52.417 23:25:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:43:52.417 23:25:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:43:52.417 23:25:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:43:52.417 23:25:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:43:52.417 23:25:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:43:52.417 23:25:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:43:52.417 23:25:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:43:52.417 23:25:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:43:52.417 23:25:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:43:52.417 23:25:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:43:52.417 23:25:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:43:52.417 23:25:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:43:52.417 23:25:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:43:52.418 23:25:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:43:52.418 23:25:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:43:52.418 23:25:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:43:52.418 23:25:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:52.418 23:25:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:43:52.418 ************************************ 00:43:52.418 START TEST bdev_fio_rw_verify 00:43:52.418 ************************************ 00:43:52.418 23:25:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:43:52.418 23:25:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:43:52.418 23:25:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:52.418 23:25:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:52.418 23:25:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:52.418 23:25:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:43:52.418 23:25:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:43:52.418 23:25:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:52.418 23:25:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:52.418 23:25:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:43:52.418 23:25:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:43:52.418 23:25:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:52.418 23:25:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:52.418 23:25:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:52.418 23:25:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:43:52.418 23:25:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:43:52.418 23:25:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:43:52.676 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:43:52.676 fio-3.35 00:43:52.676 Starting 1 thread 00:44:04.884 00:44:04.884 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90249: Mon Dec 9 23:25:44 2024 00:44:04.884 read: IOPS=10.5k, BW=41.2MiB/s (43.2MB/s)(412MiB/10001msec) 00:44:04.884 slat (usec): min=20, max=163, avg=23.05, stdev= 2.84 00:44:04.884 clat (usec): min=10, max=488, avg=152.10, stdev=55.56 00:44:04.884 lat (usec): min=32, max=527, avg=175.14, stdev=56.15 00:44:04.884 clat percentiles (usec): 00:44:04.884 | 50.000th=[ 155], 99.000th=[ 269], 99.900th=[ 347], 99.990th=[ 396], 00:44:04.884 | 99.999th=[ 453] 00:44:04.884 write: IOPS=11.0k, BW=43.0MiB/s (45.1MB/s)(425MiB/9876msec); 0 zone resets 00:44:04.884 slat (usec): min=8, max=348, avg=18.76, stdev= 4.31 00:44:04.884 clat (usec): min=70, max=1244, avg=349.34, stdev=49.35 00:44:04.884 lat (usec): min=87, max=1323, avg=368.11, stdev=50.83 00:44:04.884 clat percentiles (usec): 00:44:04.884 | 50.000th=[ 351], 99.000th=[ 490], 99.900th=[ 635], 99.990th=[ 947], 00:44:04.884 | 99.999th=[ 1172] 00:44:04.884 bw ( KiB/s): min=35416, max=47224, per=98.89%, avg=43578.11, stdev=3017.24, samples=19 00:44:04.884 iops : min= 8854, max=11806, avg=10894.53, stdev=754.31, samples=19 00:44:04.884 lat (usec) : 20=0.01%, 50=0.01%, 100=11.05%, 
250=37.32%, 500=51.19% 00:44:04.884 lat (usec) : 750=0.42%, 1000=0.01% 00:44:04.884 lat (msec) : 2=0.01% 00:44:04.884 cpu : usr=98.89%, sys=0.35%, ctx=25, majf=0, minf=8809 00:44:04.884 IO depths : 1=7.7%, 2=20.0%, 4=55.0%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:04.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.884 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.884 issued rwts: total=105484,108804,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:04.884 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:04.884 00:44:04.884 Run status group 0 (all jobs): 00:44:04.884 READ: bw=41.2MiB/s (43.2MB/s), 41.2MiB/s-41.2MiB/s (43.2MB/s-43.2MB/s), io=412MiB (432MB), run=10001-10001msec 00:44:04.884 WRITE: bw=43.0MiB/s (45.1MB/s), 43.0MiB/s-43.0MiB/s (45.1MB/s-45.1MB/s), io=425MiB (446MB), run=9876-9876msec 00:44:05.452 ----------------------------------------------------- 00:44:05.452 Suppressions used: 00:44:05.452 count bytes template 00:44:05.452 1 7 /usr/src/fio/parse.c 00:44:05.452 50 4800 /usr/src/fio/iolog.c 00:44:05.452 1 8 libtcmalloc_minimal.so 00:44:05.452 1 904 libcrypto.so 00:44:05.452 ----------------------------------------------------- 00:44:05.452 00:44:05.452 00:44:05.452 real 0m12.994s 00:44:05.452 user 0m13.384s 00:44:05.452 sys 0m0.904s 00:44:05.452 23:25:45 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:05.452 23:25:45 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:44:05.452 ************************************ 00:44:05.452 END TEST bdev_fio_rw_verify 00:44:05.452 ************************************ 00:44:05.452 23:25:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:44:05.452 23:25:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:44:05.452 23:25:46 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:44:05.452 23:25:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:44:05.452 23:25:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:44:05.452 23:25:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:44:05.452 23:25:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:44:05.452 23:25:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:44:05.452 23:25:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:44:05.452 23:25:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:44:05.452 23:25:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:44:05.452 23:25:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:44:05.452 23:25:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:44:05.452 23:25:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:44:05.452 23:25:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:44:05.452 23:25:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:44:05.452 23:25:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "828648a4-7047-487d-9af6-f6a41e3b8ffb"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "828648a4-7047-487d-9af6-f6a41e3b8ffb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "828648a4-7047-487d-9af6-f6a41e3b8ffb",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "9bf7a429-1cca-495d-b85d-6bf9ca263589",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "b28195e9-cd96-409c-881e-6168fb61dfa2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "75db3ecb-5845-4f70-b5f9-1b7462356aa7",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:44:05.453 23:25:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:44:05.711 23:25:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:44:05.711 23:25:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:44:05.711 23:25:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:44:05.711 /home/vagrant/spdk_repo/spdk 00:44:05.711 23:25:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:44:05.711 23:25:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:44:05.711 00:44:05.711 real 0m13.275s 00:44:05.711 user 0m13.502s 00:44:05.711 sys 0m1.045s 00:44:05.711 23:25:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:05.711 ************************************ 00:44:05.711 END TEST bdev_fio 00:44:05.711 23:25:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:44:05.711 ************************************ 00:44:05.711 23:25:46 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:44:05.711 23:25:46 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:44:05.711 23:25:46 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:44:05.711 23:25:46 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:05.711 23:25:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:44:05.711 ************************************ 00:44:05.711 START TEST bdev_verify 00:44:05.711 ************************************ 00:44:05.711 23:25:46 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:44:05.711 [2024-12-09 23:25:46.303389] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 
00:44:05.712 [2024-12-09 23:25:46.304194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90408 ] 00:44:05.970 [2024-12-09 23:25:46.490825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:06.251 [2024-12-09 23:25:46.637802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:06.251 [2024-12-09 23:25:46.637834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:06.843 Running I/O for 5 seconds... 00:44:08.715 12757.00 IOPS, 49.83 MiB/s [2024-12-09T23:25:50.729Z] 11802.50 IOPS, 46.10 MiB/s [2024-12-09T23:25:51.665Z] 11072.67 IOPS, 43.25 MiB/s [2024-12-09T23:25:52.602Z] 10732.00 IOPS, 41.92 MiB/s 00:44:11.966 Latency(us) 00:44:11.966 [2024-12-09T23:25:52.602Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:11.966 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:44:11.966 Verification LBA range: start 0x0 length 0x2000 00:44:11.966 raid5f : 5.02 4687.09 18.31 0.00 0.00 41160.64 201.51 38953.12 00:44:11.966 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:44:11.966 Verification LBA range: start 0x2000 length 0x2000 00:44:11.966 raid5f : 5.02 6108.53 23.86 0.00 0.00 31587.22 233.59 30530.83 00:44:11.966 [2024-12-09T23:25:52.602Z] =================================================================================================================== 00:44:11.966 [2024-12-09T23:25:52.602Z] Total : 10795.61 42.17 0.00 0.00 35744.67 201.51 38953.12 00:44:13.354 00:44:13.354 real 0m7.565s 00:44:13.354 user 0m13.816s 00:44:13.354 sys 0m0.406s 00:44:13.354 23:25:53 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:13.354 23:25:53 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set 
+x 00:44:13.354 ************************************ 00:44:13.354 END TEST bdev_verify 00:44:13.354 ************************************ 00:44:13.354 23:25:53 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:44:13.354 23:25:53 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:44:13.354 23:25:53 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:13.354 23:25:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:44:13.354 ************************************ 00:44:13.354 START TEST bdev_verify_big_io 00:44:13.354 ************************************ 00:44:13.354 23:25:53 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:44:13.354 [2024-12-09 23:25:53.933620] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:44:13.354 [2024-12-09 23:25:53.933750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90512 ] 00:44:13.613 [2024-12-09 23:25:54.123177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:13.613 [2024-12-09 23:25:54.247362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:13.613 [2024-12-09 23:25:54.247839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:14.181 Running I/O for 5 seconds... 
00:44:16.494 756.00 IOPS, 47.25 MiB/s [2024-12-09T23:25:58.066Z] 761.00 IOPS, 47.56 MiB/s [2024-12-09T23:25:59.003Z] 761.33 IOPS, 47.58 MiB/s [2024-12-09T23:25:59.961Z] 792.50 IOPS, 49.53 MiB/s [2024-12-09T23:26:00.220Z] 824.80 IOPS, 51.55 MiB/s 00:44:19.584 Latency(us) 00:44:19.584 [2024-12-09T23:26:00.220Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:19.584 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:44:19.584 Verification LBA range: start 0x0 length 0x200 00:44:19.584 raid5f : 5.17 430.00 26.87 0.00 0.00 7303727.79 157.92 325100.67 00:44:19.584 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:44:19.584 Verification LBA range: start 0x200 length 0x200 00:44:19.584 raid5f : 5.15 419.32 26.21 0.00 0.00 7552564.17 147.23 328469.59 00:44:19.584 [2024-12-09T23:26:00.220Z] =================================================================================================================== 00:44:19.584 [2024-12-09T23:26:00.220Z] Total : 849.32 53.08 0.00 0.00 7426384.81 147.23 328469.59 00:44:20.962 00:44:20.962 real 0m7.595s 00:44:20.962 user 0m14.017s 00:44:20.962 sys 0m0.291s 00:44:20.962 23:26:01 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:20.962 23:26:01 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:44:20.962 ************************************ 00:44:20.962 END TEST bdev_verify_big_io 00:44:20.962 ************************************ 00:44:20.962 23:26:01 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:44:20.962 23:26:01 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:44:20.962 23:26:01 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:20.962 23:26:01 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:44:20.962 ************************************ 00:44:20.962 START TEST bdev_write_zeroes 00:44:20.962 ************************************ 00:44:20.962 23:26:01 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:44:20.962 [2024-12-09 23:26:01.591876] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:44:20.962 [2024-12-09 23:26:01.591997] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90605 ] 00:44:21.221 [2024-12-09 23:26:01.771971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:21.479 [2024-12-09 23:26:01.885112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:22.046 Running I/O for 1 seconds... 
00:44:22.982 26199.00 IOPS, 102.34 MiB/s 00:44:22.982 Latency(us) 00:44:22.982 [2024-12-09T23:26:03.618Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:22.982 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:44:22.982 raid5f : 1.01 26180.10 102.27 0.00 0.00 4873.98 1500.22 6632.56 00:44:22.982 [2024-12-09T23:26:03.618Z] =================================================================================================================== 00:44:22.982 [2024-12-09T23:26:03.618Z] Total : 26180.10 102.27 0.00 0.00 4873.98 1500.22 6632.56 00:44:24.360 00:44:24.360 real 0m3.389s 00:44:24.360 user 0m2.994s 00:44:24.360 sys 0m0.265s 00:44:24.360 23:26:04 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:24.360 23:26:04 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:44:24.360 ************************************ 00:44:24.360 END TEST bdev_write_zeroes 00:44:24.360 ************************************ 00:44:24.360 23:26:04 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:44:24.360 23:26:04 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:44:24.360 23:26:04 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:24.360 23:26:04 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:44:24.360 ************************************ 00:44:24.360 START TEST bdev_json_nonenclosed 00:44:24.360 ************************************ 00:44:24.360 23:26:04 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:44:24.618 [2024-12-09 
23:26:05.059279] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:44:24.618 [2024-12-09 23:26:05.059441] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90658 ] 00:44:24.618 [2024-12-09 23:26:05.238475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:24.875 [2024-12-09 23:26:05.355965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:24.875 [2024-12-09 23:26:05.356058] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:44:24.875 [2024-12-09 23:26:05.356089] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:44:24.875 [2024-12-09 23:26:05.356101] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:44:25.133 00:44:25.133 real 0m0.650s 00:44:25.133 user 0m0.402s 00:44:25.133 sys 0m0.144s 00:44:25.133 23:26:05 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:25.133 23:26:05 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:44:25.133 ************************************ 00:44:25.133 END TEST bdev_json_nonenclosed 00:44:25.133 ************************************ 00:44:25.133 23:26:05 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:44:25.133 23:26:05 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:44:25.133 23:26:05 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:25.133 23:26:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:44:25.133 
************************************ 00:44:25.133 START TEST bdev_json_nonarray 00:44:25.133 ************************************ 00:44:25.133 23:26:05 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:44:25.390 [2024-12-09 23:26:05.785333] Starting SPDK v25.01-pre git sha1 c12cb8fe3 / DPDK 24.03.0 initialization... 00:44:25.390 [2024-12-09 23:26:05.785469] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90689 ] 00:44:25.390 [2024-12-09 23:26:05.968050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:25.648 [2024-12-09 23:26:06.080474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:25.648 [2024-12-09 23:26:06.080574] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:44:25.648 [2024-12-09 23:26:06.080597] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:44:25.648 [2024-12-09 23:26:06.080617] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:44:25.923 00:44:25.923 real 0m0.649s 00:44:25.923 user 0m0.405s 00:44:25.923 sys 0m0.139s 00:44:25.923 23:26:06 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:25.923 23:26:06 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:44:25.923 ************************************ 00:44:25.923 END TEST bdev_json_nonarray 00:44:25.923 ************************************ 00:44:25.923 23:26:06 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:44:25.923 23:26:06 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:44:25.923 23:26:06 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:44:25.923 23:26:06 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:44:25.923 23:26:06 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:44:25.923 23:26:06 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:44:25.923 23:26:06 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:44:25.923 23:26:06 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:44:25.923 23:26:06 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:44:25.923 23:26:06 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:44:25.923 23:26:06 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:44:25.923 00:44:25.923 real 0m49.888s 00:44:25.923 user 1m7.325s 00:44:25.923 sys 0m5.682s 00:44:25.923 23:26:06 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:25.923 23:26:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:44:25.923 
************************************ 00:44:25.923 END TEST blockdev_raid5f 00:44:25.923 ************************************ 00:44:25.923 23:26:06 -- spdk/autotest.sh@194 -- # uname -s 00:44:25.923 23:26:06 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:44:25.923 23:26:06 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:44:25.923 23:26:06 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:44:25.923 23:26:06 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:44:25.923 23:26:06 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:44:25.923 23:26:06 -- spdk/autotest.sh@260 -- # timing_exit lib 00:44:25.923 23:26:06 -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:25.923 23:26:06 -- common/autotest_common.sh@10 -- # set +x 00:44:25.923 23:26:06 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:44:25.923 23:26:06 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:44:25.923 23:26:06 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:44:25.923 23:26:06 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:44:25.923 23:26:06 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:44:25.923 23:26:06 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:44:25.923 23:26:06 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:44:25.923 23:26:06 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:44:25.923 23:26:06 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:44:25.923 23:26:06 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:44:25.923 23:26:06 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:44:25.923 23:26:06 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:44:25.923 23:26:06 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:44:25.923 23:26:06 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:44:25.923 23:26:06 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:44:25.923 23:26:06 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:44:25.923 23:26:06 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:44:25.923 23:26:06 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:44:25.923 23:26:06 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:44:25.923 23:26:06 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:44:25.923 23:26:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:25.923 23:26:06 -- common/autotest_common.sh@10 -- # set +x 00:44:25.923 23:26:06 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:44:25.923 23:26:06 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:44:26.189 23:26:06 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:44:26.189 23:26:06 -- common/autotest_common.sh@10 -- # set +x 00:44:28.721 INFO: APP EXITING 00:44:28.721 INFO: killing all VMs 00:44:28.721 INFO: killing vhost app 00:44:28.721 INFO: EXIT DONE 00:44:28.721 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:44:28.721 Waiting for block devices as requested 00:44:28.980 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:44:28.980 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:44:29.916 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:44:29.916 Cleaning 00:44:29.916 Removing: /var/run/dpdk/spdk0/config 00:44:29.916 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:44:29.916 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:44:29.916 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:44:29.916 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:44:29.916 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:44:29.916 Removing: /var/run/dpdk/spdk0/hugepage_info 00:44:29.916 Removing: /dev/shm/spdk_tgt_trace.pid56736 00:44:29.916 Removing: /var/run/dpdk/spdk0 00:44:29.916 Removing: /var/run/dpdk/spdk_pid56490 00:44:29.916 Removing: /var/run/dpdk/spdk_pid56736 00:44:29.916 Removing: /var/run/dpdk/spdk_pid56971 00:44:29.916 Removing: /var/run/dpdk/spdk_pid57075 00:44:29.916 Removing: /var/run/dpdk/spdk_pid57131 00:44:29.916 Removing: /var/run/dpdk/spdk_pid57270 00:44:29.916 Removing: /var/run/dpdk/spdk_pid57288 
00:44:29.916 Removing: /var/run/dpdk/spdk_pid57498 00:44:29.916 Removing: /var/run/dpdk/spdk_pid57615 00:44:29.916 Removing: /var/run/dpdk/spdk_pid57722 00:44:29.916 Removing: /var/run/dpdk/spdk_pid57850 00:44:29.916 Removing: /var/run/dpdk/spdk_pid57958 00:44:29.916 Removing: /var/run/dpdk/spdk_pid57997 00:44:29.916 Removing: /var/run/dpdk/spdk_pid58034 00:44:29.916 Removing: /var/run/dpdk/spdk_pid58110 00:44:29.916 Removing: /var/run/dpdk/spdk_pid58216 00:44:29.916 Removing: /var/run/dpdk/spdk_pid58674 00:44:29.916 Removing: /var/run/dpdk/spdk_pid58757 00:44:30.174 Removing: /var/run/dpdk/spdk_pid58836 00:44:30.174 Removing: /var/run/dpdk/spdk_pid58852 00:44:30.174 Removing: /var/run/dpdk/spdk_pid59014 00:44:30.174 Removing: /var/run/dpdk/spdk_pid59036 00:44:30.174 Removing: /var/run/dpdk/spdk_pid59184 00:44:30.174 Removing: /var/run/dpdk/spdk_pid59207 00:44:30.174 Removing: /var/run/dpdk/spdk_pid59275 00:44:30.174 Removing: /var/run/dpdk/spdk_pid59299 00:44:30.174 Removing: /var/run/dpdk/spdk_pid59368 00:44:30.174 Removing: /var/run/dpdk/spdk_pid59386 00:44:30.174 Removing: /var/run/dpdk/spdk_pid59589 00:44:30.174 Removing: /var/run/dpdk/spdk_pid59631 00:44:30.174 Removing: /var/run/dpdk/spdk_pid59714 00:44:30.174 Removing: /var/run/dpdk/spdk_pid61091 00:44:30.174 Removing: /var/run/dpdk/spdk_pid61302 00:44:30.174 Removing: /var/run/dpdk/spdk_pid61448 00:44:30.174 Removing: /var/run/dpdk/spdk_pid62097 00:44:30.174 Removing: /var/run/dpdk/spdk_pid62308 00:44:30.174 Removing: /var/run/dpdk/spdk_pid62454 00:44:30.174 Removing: /var/run/dpdk/spdk_pid63097 00:44:30.174 Removing: /var/run/dpdk/spdk_pid63422 00:44:30.174 Removing: /var/run/dpdk/spdk_pid63562 00:44:30.174 Removing: /var/run/dpdk/spdk_pid64947 00:44:30.174 Removing: /var/run/dpdk/spdk_pid65200 00:44:30.174 Removing: /var/run/dpdk/spdk_pid65346 00:44:30.174 Removing: /var/run/dpdk/spdk_pid66731 00:44:30.174 Removing: /var/run/dpdk/spdk_pid66990 00:44:30.174 Removing: /var/run/dpdk/spdk_pid67135 
00:44:30.174 Removing: /var/run/dpdk/spdk_pid68515 00:44:30.174 Removing: /var/run/dpdk/spdk_pid68962 00:44:30.174 Removing: /var/run/dpdk/spdk_pid69107 00:44:30.174 Removing: /var/run/dpdk/spdk_pid70587 00:44:30.174 Removing: /var/run/dpdk/spdk_pid70855 00:44:30.174 Removing: /var/run/dpdk/spdk_pid70995 00:44:30.174 Removing: /var/run/dpdk/spdk_pid72489 00:44:30.174 Removing: /var/run/dpdk/spdk_pid72748 00:44:30.174 Removing: /var/run/dpdk/spdk_pid72898 00:44:30.174 Removing: /var/run/dpdk/spdk_pid74386 00:44:30.174 Removing: /var/run/dpdk/spdk_pid74874 00:44:30.174 Removing: /var/run/dpdk/spdk_pid75024 00:44:30.174 Removing: /var/run/dpdk/spdk_pid75169 00:44:30.174 Removing: /var/run/dpdk/spdk_pid75600 00:44:30.174 Removing: /var/run/dpdk/spdk_pid76330 00:44:30.174 Removing: /var/run/dpdk/spdk_pid76719 00:44:30.174 Removing: /var/run/dpdk/spdk_pid77429 00:44:30.174 Removing: /var/run/dpdk/spdk_pid77885 00:44:30.174 Removing: /var/run/dpdk/spdk_pid78640 00:44:30.174 Removing: /var/run/dpdk/spdk_pid79050 00:44:30.174 Removing: /var/run/dpdk/spdk_pid81017 00:44:30.174 Removing: /var/run/dpdk/spdk_pid81455 00:44:30.174 Removing: /var/run/dpdk/spdk_pid81896 00:44:30.174 Removing: /var/run/dpdk/spdk_pid83995 00:44:30.174 Removing: /var/run/dpdk/spdk_pid84479 00:44:30.174 Removing: /var/run/dpdk/spdk_pid84997 00:44:30.174 Removing: /var/run/dpdk/spdk_pid86054 00:44:30.174 Removing: /var/run/dpdk/spdk_pid86381 00:44:30.174 Removing: /var/run/dpdk/spdk_pid87316 00:44:30.433 Removing: /var/run/dpdk/spdk_pid87640 00:44:30.433 Removing: /var/run/dpdk/spdk_pid88576 00:44:30.433 Removing: /var/run/dpdk/spdk_pid88899 00:44:30.433 Removing: /var/run/dpdk/spdk_pid89575 00:44:30.433 Removing: /var/run/dpdk/spdk_pid89864 00:44:30.433 Removing: /var/run/dpdk/spdk_pid89931 00:44:30.433 Removing: /var/run/dpdk/spdk_pid89973 00:44:30.433 Removing: /var/run/dpdk/spdk_pid90229 00:44:30.433 Removing: /var/run/dpdk/spdk_pid90408 00:44:30.433 Removing: /var/run/dpdk/spdk_pid90512 
00:44:30.433 Removing: /var/run/dpdk/spdk_pid90605 00:44:30.433 Removing: /var/run/dpdk/spdk_pid90658 00:44:30.433 Removing: /var/run/dpdk/spdk_pid90689 00:44:30.433 Clean 00:44:30.433 23:26:10 -- common/autotest_common.sh@1453 -- # return 0 00:44:30.433 23:26:10 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:44:30.433 23:26:10 -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:30.433 23:26:10 -- common/autotest_common.sh@10 -- # set +x 00:44:30.433 23:26:10 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:44:30.433 23:26:10 -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:30.433 23:26:10 -- common/autotest_common.sh@10 -- # set +x 00:44:30.433 23:26:11 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:44:30.433 23:26:11 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:44:30.433 23:26:11 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:44:30.692 23:26:11 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:44:30.692 23:26:11 -- spdk/autotest.sh@398 -- # hostname 00:44:30.692 23:26:11 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:44:30.692 geninfo: WARNING: invalid characters removed from testname! 
00:44:57.320 23:26:34 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:44:57.320 23:26:37 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:44:59.267 23:26:39 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:45:01.804 23:26:42 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:45:04.340 23:26:44 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:45:06.437 23:26:46 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:45:08.970 23:26:48 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:45:08.970 23:26:48 -- spdk/autorun.sh@1 -- $ timing_finish 00:45:08.970 23:26:48 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:45:08.970 23:26:48 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:45:08.970 23:26:48 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:45:08.970 23:26:48 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:45:08.970 + [[ -n 5205 ]] 00:45:08.970 + sudo kill 5205 00:45:08.980 [Pipeline] } 00:45:08.995 [Pipeline] // timeout 00:45:09.000 [Pipeline] } 00:45:09.015 [Pipeline] // stage 00:45:09.020 [Pipeline] } 00:45:09.034 [Pipeline] // catchError 00:45:09.044 [Pipeline] stage 00:45:09.046 [Pipeline] { (Stop VM) 00:45:09.059 [Pipeline] sh 00:45:09.341 + vagrant halt 00:45:12.630 ==> default: Halting domain... 00:45:19.209 [Pipeline] sh 00:45:19.562 + vagrant destroy -f 00:45:22.847 ==> default: Removing domain... 
00:45:22.859 [Pipeline] sh 00:45:23.143 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:45:23.152 [Pipeline] } 00:45:23.167 [Pipeline] // stage 00:45:23.172 [Pipeline] } 00:45:23.187 [Pipeline] // dir 00:45:23.192 [Pipeline] } 00:45:23.207 [Pipeline] // wrap 00:45:23.213 [Pipeline] } 00:45:23.227 [Pipeline] // catchError 00:45:23.236 [Pipeline] stage 00:45:23.238 [Pipeline] { (Epilogue) 00:45:23.252 [Pipeline] sh 00:45:23.534 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:45:28.847 [Pipeline] catchError 00:45:28.850 [Pipeline] { 00:45:28.865 [Pipeline] sh 00:45:29.154 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:45:29.154 Artifacts sizes are good 00:45:29.165 [Pipeline] } 00:45:29.177 [Pipeline] // catchError 00:45:29.186 [Pipeline] archiveArtifacts 00:45:29.193 Archiving artifacts 00:45:29.347 [Pipeline] cleanWs 00:45:29.364 [WS-CLEANUP] Deleting project workspace... 00:45:29.364 [WS-CLEANUP] Deferred wipeout is used... 00:45:29.372 [WS-CLEANUP] done 00:45:29.380 [Pipeline] } 00:45:29.423 [Pipeline] // stage 00:45:29.430 [Pipeline] } 00:45:29.443 [Pipeline] // node 00:45:29.446 [Pipeline] End of Pipeline 00:45:29.470 Finished: SUCCESS